Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated thinking of optimization and statistics leads to fruitful research findings.


Introduction
Modern information processing and machine learning often have to deal with (structured) low-rank matrix factorization. Given a few observations y ∈ R^m about a matrix M⋆ ∈ R^{n1×n2} of rank r ≪ min{n_1, n_2}, one seeks a low-rank solution compatible with this set of observations as well as other prior constraints. Examples include low-rank matrix completion [1][2][3], phase retrieval [4], blind deconvolution and self-calibration [5,6], robust principal component analysis [7,8], synchronization and alignment [9,10], to name just a few. A common goal of these problems is to develop reliable, scalable, and robust algorithms to estimate a low-rank matrix from potentially noisy, nonlinear, and highly incomplete observations.

Optimization-based methods
Towards this goal, arguably one of the most popular approaches is optimization-based methods. By factorizing a candidate solution M ∈ R^{n1×n2} as LR^⊤ with low-rank factors L ∈ R^{n1×r} and R ∈ R^{n2×r}, one attempts recovery by solving an optimization problem of the form
minimize_M f(M)    (1a)
subject to M = LR^⊤ with L ∈ R^{n1×r}, R ∈ R^{n2×r}, and M ∈ C.    (1b)
Here, f(·) is a certain empirical risk function (e.g. Euclidean loss, negative log-likelihood) that evaluates how well a candidate solution fits the observations, and the set C encodes additional prior constraints, if any. This problem is often highly nonconvex and appears daunting to solve to global optimality at first sight. After all, conventional wisdom usually perceives nonconvex optimization as a computationally intractable task that is susceptible to local minima. To bypass the challenge, one can resort to convex relaxation, an effective strategy that already enjoys theoretical success in addressing a large number of problems. The basic idea is to convexify the problem by, amongst others, dropping or replacing the low-rank constraint [cf. (1b)] by a nuclear norm constraint [1][2][3][11][12][13], and solving the convexified problem in the full matrix space (i.e. the space of M). While such convex relaxation schemes exhibit intriguing performance guarantees in several aspects (e.g. near-minimal sample complexity, stability against noise), their computational cost often scales at least cubically in the size of the matrix M, which often far exceeds the time taken to read the data. In addition, the prohibitive storage complexity associated with the convex relaxation approach presents another hurdle that limits its applicability to large-scale problems.
This overview article focuses on provable low-rank matrix estimation based on nonconvex optimization. This approach operates over the parsimonious factorized representation [cf. (1b)] and optimizes the nonconvex loss directly over the low-rank factors L and R. The advantage is clear: adopting an economical representation of the low-rank matrix results in low storage requirements, affordable per-iteration computational cost, amenability to parallelization, and scalability to large problem sizes when performing iterative optimization methods like gradient descent. However, despite its wide use and remarkable performance in practice [14,15], the foundational understanding of generic nonconvex optimization is far from mature. It is often unclear whether an optimization algorithm can converge to the desired global solution and, if so, how fast this can be accomplished. For many nonconvex problems, theoretical underpinnings had been lacking until very recently.

Nonconvex optimization meets statistical models
Fortunately, despite general intractability, some important nonconvex problems may not be as hard as they seem. For instance, for several low-rank matrix factorization problems, it has been shown that, under proper statistical models, simple first-order methods are guaranteed to succeed in a small number of iterations, achieving low computational and sample complexities simultaneously (e.g. [16][17][18][19][20][21][22][23][24][25][26]). The key to enabling guaranteed and scalable computation is to concentrate on problems arising from specific statistical signal estimation tasks, which may exhibit benign structures amenable to computation and rule out undesired "hard" instances by focusing on the average-case performance. Two messages deserve particular attention when we examine the geometry of the associated nonconvex loss functions:
• Basin of attraction. For several statistical problems of this kind, there often exists a reasonably large basin of attraction around the global solution, within which an iterative method like gradient descent is guaranteed to be successful and converge fast. Such a basin might exist even when the sample complexity is quite close to the information-theoretic limit [16][17][18][19][20][21][22].
• Benign global landscape. Several problems provably enjoy benign optimization landscape when the sample size is sufficiently large, in the sense that there is no spurious local minima, i.e. all local minima are also global minima, and that the only undesired stationary points are strict saddle points [27][28][29][30][31].
These important messages inspire a recent flurry of activities in the design of two contrasting algorithmic approaches:
• Two-stage approach. Motivated by the existence of a basin of attraction, a large number of works follow a two-stage paradigm: (1) initialization, which locates an initial guess within the basin; (2) iterative refinement, which successively refines the estimate without leaving the basin. This approach often leads to very efficient algorithms that run in time proportional to that taken to read the data.
• Saddle-point escaping algorithms. In the absence of spurious local minima, a key challenge boils down to how to efficiently escape undesired saddle points and find a local minimum, which is the focus of this approach. This approach does not rely on carefully-designed initialization.
The research along these lines highlights the synergy between statistics and optimization in signal processing and machine learning. The algorithmic choice often needs to properly exploit the underlying statistical models in order to be truly efficient, in terms of both statistical accuracy and computational efficiency.

This paper
Understanding the effectiveness of nonconvex optimization is currently among the most active areas of research in signal processing, machine learning, optimization and statistics. Many exciting new developments in the last several years have significantly advanced our understanding of this approach for various statistical problems. This article aims to provide a thorough technical overview of important recent results in this exciting area, targeting the broader signal processing, machine learning, statistics, and optimization communities.
The rest of this paper is organized as follows. Section 2 reviews some preliminary facts on optimization that are instrumental to understanding the materials in this paper. Section 3 uses a toy (but non-trivial) example, namely rank-1 matrix factorization, to illustrate why one can hope to solve a nonconvex problem to global optimality, through both local and global lenses. Section 4 introduces a few canonical statistical estimation problems that will be visited multiple times in the sequel. Section 5 and Section 6 review gradient descent and its many variants as a local refinement procedure, followed by a discussion of other methods in Section 7. Section 8 discusses the spectral method, which is commonly used to provide an initialization within the basin of attraction. Section 9 provides a global landscape analysis, in conjunction with algorithms that work without the need of careful initialization. We conclude the paper in Section 10 with some discussions and remarks. Furthermore, a short note is provided at the end of several sections to cover some historical remarks and provide further pointers.

Notations
It is convenient to introduce a few notations that will be used throughout. We use boldfaced symbols to represent vectors and matrices. For any vector v, we let ‖v‖_2, ‖v‖_1 and ‖v‖_0 denote its ℓ_2, ℓ_1, and ℓ_0 norm, respectively. For any matrix M, let ‖M‖, ‖M‖_F, ‖M‖_*, ‖M‖_{2,∞}, and ‖M‖_∞ stand for the spectral norm (i.e. the largest singular value), the Frobenius norm, the nuclear norm (i.e. the sum of the singular values), the ℓ_2/ℓ_∞ norm (i.e. the largest ℓ_2 norm of the rows), and the entrywise ℓ_∞ norm (the largest magnitude of all entries), respectively. We denote by σ_j(M) (resp. λ_j(M)) its jth largest singular value (resp. eigenvalue), and let M_{j,·} (resp. M_{·,j}) represent its jth row (resp. column). The condition number of M is denoted by κ(M). In addition, M^⊤, M^* and M̄ indicate the transpose, the conjugate transpose, and the entrywise conjugate of M, respectively. For two matrices A and B of the same size, we define their inner product as ⟨A, B⟩ = Tr(A^⊤B), where Tr(·) stands for the trace. The matrix I_n denotes the n × n identity matrix, and e_i denotes the ith column of I_n. For any linear operator A, denote by A^* its adjoint operator. For example, if A maps X ∈ R^{n×n} to [⟨A_i, X⟩]_{1≤i≤m}, then A^*(y) = Σ_{i=1}^m y_i A_i. We also let vec(Z) denote the vectorization of a matrix Z. The indicator function 1_A equals 1 when the event A holds true, and 0 otherwise. Further, the notation O^{r×r} denotes the set of r × r orthonormal matrices. Let A and B be two square matrices. We write A ≻ B (resp. A ⪰ B) if their difference A − B is a positive definite (resp. positive semidefinite) matrix. Additionally, the standard notation f(n) = O(g(n)) or f(n) ≲ g(n) means that there exists a constant c > 0 such that |f(n)| ≤ c|g(n)|; f(n) ≳ g(n) means that there exists a constant c > 0 such that |f(n)| ≥ c|g(n)|; and f(n) ≍ g(n) means that there exist constants c_1, c_2 > 0 such that c_1|g(n)| ≤ |f(n)| ≤ c_2|g(n)|.

Preliminaries in optimization theory
We start by reviewing some basic concepts and preliminary facts in optimization theory that play a vital role in the main development of the theory. For simplicity of presentation, this section focuses on an unconstrained problem
minimize_{x ∈ R^n} f(x).    (2)
The optimal solution, if it exists, is denoted by x_opt := argmin_{x ∈ R^n} f(x). When f(x) is strictly convex, x_opt is unique; it may be non-unique when f(·) is nonconvex.

Gradient descent for locally strongly convex functions
To solve (2), arguably the simplest method is (vanilla) gradient descent (GD), which follows the update rule
x^{t+1} = x^t − η_t ∇f(x^t), t = 0, 1, . . .    (4)
Here, η_t is the step size or learning rate at the tth iteration, and x^0 is the initial point. This method and its variants are widely used in practice, partly due to their simplicity and scalability to large-scale problems. A central question is when GD converges fast to the global minimum x_opt. As is well-known in the optimization literature, GD is provably convergent at a linear rate when f(·) is (locally) strongly convex and smooth. Here, an algorithm is said to converge linearly if the error ‖x^t − x_opt‖_2 converges to 0 as a geometric series. To formally state this result, we define two concepts that commonly arise in the optimization literature.
Definition 1 (Strong convexity). A twice continuously differentiable function f : R^n → R is said to be α-strongly convex in a set B if
∇²f(x) ⪰ α I_n for all x ∈ B.

Definition 2 (Smoothness). A twice continuously differentiable function f : R^n → R is said to be β-smooth in a set B if
‖∇²f(x)‖ ≤ β for all x ∈ B.

With these in place, we have the following standard result:

Lemma 1. Suppose that f is α-strongly convex and β-smooth within a local ball B_ζ(x_opt) := {x : ‖x − x_opt‖_2 ≤ ζ}, and that x^0 ∈ B_ζ(x_opt). If η_t ≡ 1/β, then GD obeys
‖x^t − x_opt‖_2 ≤ (1 − α/β)^t ‖x^0 − x_opt‖_2, t = 0, 1, . . .    (7)

Proof of Lemma 1. The optimality of x_opt indicates that ∇f(x_opt) = 0, which allows us to rewrite the GD update rule as
x^{t+1} − x_opt = x^t − η_t ∇f(x^t) − [x_opt − η_t ∇f(x_opt)]
= [I_n − η_t ∫_0^1 ∇²f(x(τ)) dτ] (x^t − x_opt), where x(τ) := x^t + τ(x_opt − x^t).
Here, the second line arises from the fundamental theorem of calculus [32, Theorem 4.2]. If x^t ∈ B_ζ(x_opt), then it is self-evident that x(τ) ∈ B_ζ(x_opt) for all 0 ≤ τ ≤ 1, which combined with the assumptions of Lemma 1 gives
α I_n ⪯ ∇²f(x(τ)) ⪯ β I_n.
Therefore, as long as 0 < η_t ≤ 1/β (and hence η_t ‖∇²f(x(τ))‖ ≤ 1), we have
‖I_n − η_t ∫_0^1 ∇²f(x(τ)) dτ‖ ≤ 1 − α η_t.
This together with the sub-multiplicativity of ‖·‖ yields
‖x^{t+1} − x_opt‖_2 ≤ (1 − α η_t) ‖x^t − x_opt‖_2.
By setting η_t = 1/β, we arrive at the desired ℓ_2 error contraction, namely,
‖x^{t+1} − x_opt‖_2 ≤ (1 − α/β) ‖x^t − x_opt‖_2.
A byproduct is: if x^t ∈ B_ζ(x_opt), then the next iterate x^{t+1} also falls in B_ζ(x_opt). Consequently, applying the above argument repetitively and recalling the assumption x^0 ∈ B_ζ(x_opt), we see that all GD iterates remain within B_ζ(x_opt). Hence, (7) holds true for all t. This immediately concludes the proof.

Figure 1: An example of f(·) taken from [20].
This result essentially implies that, to yield ε-accuracy (in a relative sense), i.e. ‖x^t − x_opt‖_2 ≤ ε ‖x_opt‖_2, the number of iterations required for GD (termed the iteration complexity) is at most O((β/α) log(1/ε)), provided that GD is initialized properly so that x^0 lies in the local region B_ζ(x_opt). In words, the iteration complexity scales linearly with the condition number, namely the ratio β/α of the smoothness to the strong convexity parameter. As we shall see, for multiple problems considered herein, the radius ζ of this locally strongly convex and smooth ball B_ζ(x_opt) can be reasonably large (e.g. on the same order as ‖x_opt‖_2).
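To make the contraction in Lemma 1 concrete, the following minimal sketch (our own illustration, not taken from the text) runs vanilla GD on a strongly convex quadratic, where the parameters α and β are known exactly, and checks the per-iteration (1 − α/β) contraction:

```python
import numpy as np

# Minimal sketch (our illustration): vanilla GD on a strongly convex quadratic
# f(x) = 0.5 x'Ax - b'x, where alpha and beta are the extreme eigenvalues of A.
rng = np.random.default_rng(0)
n = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(1.0, 10.0, n)            # alpha = 1, beta = 10
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)
x_opt = np.linalg.solve(A, b)               # the unique minimizer

alpha, beta = eigs[0], eigs[-1]
eta = 1.0 / beta                            # step size prescribed by Lemma 1
x = np.zeros(n)
errors = [np.linalg.norm(x - x_opt)]
for _ in range(200):
    x = x - eta * (A @ x - b)               # gradient step: grad f = Ax - b
    errors.append(np.linalg.norm(x - x_opt))

# Each step contracts the error by at least (1 - alpha/beta), as in (7).
rate = 1 - alpha / beta
assert all(e1 <= rate * e0 + 1e-12 for e0, e1 in zip(errors, errors[1:]))
```

After 200 iterations the error has shrunk by a factor of roughly (0.9)^200, consistent with the geometric series in (7).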

Convergence under regularity conditions
Another condition that has been extensively employed in the literature is the Regularity Condition (RC), which accommodates algorithms beyond vanilla GD and is applicable to possibly nonsmooth functions. Specifically, consider the iterative algorithm
x^{t+1} = x^t − η_t g(x^t)
for some general mapping g(·) : R^n → R^n. In vanilla GD, g(x) = ∇f(x), but g(·) can also incorporate several variants of GD; see Section 6. The regularity condition is defined as follows.

Definition 3 (Regularity condition). A mapping g(·) : R^n → R^n is said to obey the regularity condition RC(μ, λ, ζ) for some μ, λ, ζ > 0 if
2⟨g(x), x − x_opt⟩ ≥ μ ‖g(x)‖_2^2 + λ ‖x − x_opt‖_2^2 for all x ∈ B_ζ(x_opt).

In words, the search direction g(x) is required to be positively correlated with the error x − x_opt, so that moving against g(x) makes progress. Under this condition, the iteration converges linearly, in a manner similar to Lemma 1.

Lemma 2. Suppose that g(·) obeys RC(μ, λ, ζ) and that x^0 ∈ B_ζ(x_opt). Then the update x^{t+1} = x^t − η_t g(x^t) with η_t ≡ μ obeys
‖x^t − x_opt‖_2^2 ≤ (1 − μλ)^t ‖x^0 − x_opt‖_2^2, t = 0, 1, . . .
In view of Lemma 2, the iteration complexity to reach ε-accuracy (i.e. ‖x^t − x_opt‖_2 ≤ ε ‖x_opt‖_2) is at most O((1/(μλ)) log(1/ε)), as long as a suitable initialization is provided.

Critical points
An iterative algorithm like gradient descent often converges to one of its fixed points [34]. For gradient descent, the associated fixed points are (first-order) critical points or stationary points of the loss function, defined as follows.
Definition 4 (First-order critical points). A first-order critical point (stationary point) x of f (·) is any point that satisfies ∇f (x) = 0.
Moreover, we call a point x an ε-first-order critical point, for some ε > 0, if it satisfies ‖∇f(x)‖_2 ≤ ε.
A critical point can be a local minimum, a local maximum, or a saddle point of f(·), depending on the curvatures at / surrounding the point. Specifically, denote by ∇²f(x) the Hessian matrix at x, and let λ_min(∇²f(x)) be its minimum eigenvalue. Then for any first-order critical point x:
• if ∇²f(x) ≺ 0, then x is a local maximum;
• if ∇²f(x) ≻ 0, then x is a local minimum;
• if λ_min(∇²f(x)) < 0, then x is a strict saddle point (a set that includes local maxima);
• if λ_min(∇²f(x)) = 0, then x is either a local minimum or a degenerate saddle point.
Another concept that will be useful is that of second-order critical points, defined as follows.
Definition 5 (Second-order critical points). A point x is said to be a second-order critical point (stationary point) if ∇f(x) = 0 and ∇²f(x) ⪰ 0.

Clearly, second-order critical points do not encompass local maxima and strict saddle points and, as we shall see, are of more interest for the nonconvex problems considered herein. Since we are interested in minimizing the loss function, we do not distinguish between local maxima and strict saddle points.
A warm-up example: rank-1 matrix factorization
For pedagogical reasons, we begin with a self-contained study of a simple nonconvex matrix factorization problem, demonstrating local convergence in the basin of attraction in Section 3.1 and the benign global landscape in Section 3.2. The analysis in this section requires only elementary calculations. Specifically, consider a positive semidefinite matrix M ∈ R^{n×n} (which is not necessarily low-rank) with eigendecomposition M = Σ_{i=1}^n λ_i u_i u_i^⊤. We assume throughout this section that there is a gap between the 1st and 2nd largest eigenvalues, namely,
λ_1 > λ_2 ≥ λ_3 ≥ · · · ≥ λ_n ≥ 0.
The aim is to find the best rank-1 approximation of M. Clearly, this can be posed as the following problem:
minimize_{x ∈ R^n} f(x) := (1/4) ‖xx^⊤ − M‖_F^2,    (14)
where f(x) is a degree-four polynomial and highly nonconvex. The solution to (14) can be expressed in closed form as the scaled leading eigenvector ±√λ_1 u_1. See Fig. 2 for an illustration of the function f(x) when x ∈ R^2. This problem stems from interpreting principal component analysis (PCA) from an optimization perspective, which has a long history in the literature on (linear) neural networks and unsupervised learning; see for example [35][36][37][38][39][40].
We attempt to minimize the nonconvex function f (·) directly in spite of nonconvexity. This problem, though simple, plays a critical role in understanding the success of nonconvex optimization, since several important nonconvex estimation problems can be regarded as randomized versions or extensions of this problem.

Local linear convergence of gradient descent
To begin with, we demonstrate that gradient descent, when initialized at a point sufficiently close to the true optimizer (i.e. ± √ λ 1 u 1 ), is guaranteed to converge fast.
Theorem 1. Consider the problem (14), and set ζ := ((λ_1 − λ_2)/(15 λ_1)) ‖√λ_1 u_1‖_2. Suppose that the initialization satisfies x^0 ∈ B_ζ(√λ_1 u_1) and that the step size η_t ≡ η is a sufficiently small constant on the order of 1/λ_1. Then the GD iterates (4) obey
‖x^t − √λ_1 u_1‖_2 ≤ (1 − c (λ_1 − λ_2)/λ_1)^t ‖x^0 − √λ_1 u_1‖_2, t = 0, 1, . . .
for some constant c > 0.

Remark 1. By symmetry, Theorem 1 continues to hold if u_1 is replaced by −u_1.
In a nutshell, Theorem 1 establishes linear convergence of GD for rank-1 matrix factorization, where the convergence rate largely depends upon the eigen-gap (relative to the largest eigenvalue). This is a local result, assuming that a suitable initialization is present in the basin of attraction B_ζ(√λ_1 u_1). Its radius, which is not optimized in this theorem, is given by ζ = ((λ_1 − λ_2)/(15 λ_1)) ‖√λ_1 u_1‖_2, which also depends on the relative eigen-gap (λ_1 − λ_2)/λ_1.
Proof of Theorem 1. The proof mainly consists of showing that f(·) is locally strongly convex and smooth over B_ζ(√λ_1 u_1), which allows us to invoke Lemma 1. The gradient and the Hessian of f(x) are given respectively by
∇f(x) = (xx^⊤ − M) x,    (15)
∇²f(x) = 2 xx^⊤ + ‖x‖_2^2 I_n − M.    (16)
For notational simplicity, let
∆ := x − √λ_1 u_1,    (17)
so that x ∈ B_ζ(√λ_1 u_1) is equivalent to ‖∆‖_2 ≤ ζ ≤ √λ_1 / 15.
We start with the smoothness condition. The triangle inequality gives
‖∇²f(x)‖ ≤ 2 ‖xx^⊤‖ + ‖x‖_2^2 + ‖M‖ = 3 ‖x‖_2^2 + λ_1 ≤ 3 (√λ_1 + ζ)^2 + λ_1 ≤ 5 λ_1,    (18)
where the last line follows from ‖x‖_2 ≤ ‖√λ_1 u_1‖_2 + ‖∆‖_2 ≤ √λ_1 + ζ together with (17).
Next, it comes from the definition (17) of ∆ that
xx^⊤ = λ_1 u_1 u_1^⊤ + √λ_1 (u_1 ∆^⊤ + ∆ u_1^⊤) + ∆∆^⊤ ⪰ λ_1 u_1 u_1^⊤ − 3 √λ_1 ‖∆‖_2 I_n,
and that ‖x‖_2^2 ≥ (√λ_1 − ‖∆‖_2)^2 ≥ λ_1 − 2 √λ_1 ‖∆‖_2. Substitution into (16) yields a strong convexity lower bound:
∇²f(x) ⪰ 2 λ_1 u_1 u_1^⊤ + λ_1 I_n − M − 8 √λ_1 ‖∆‖_2 I_n ⪰ (λ_1 − λ_2) I_n − 8 √λ_1 ζ I_n ⪰ ((λ_1 − λ_2)/3) I_n,
where the last inequality is an immediate consequence of the choice of ζ (so that 8 √λ_1 ζ ≤ (8/15)(λ_1 − λ_2)).
Applying Lemma 1 establishes the claim.
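The behavior promised by Theorem 1 is easy to reproduce numerically. The sketch below is our own illustration (the step size and iteration count are heuristic choices): it runs GD on f(x) = (1/4)‖xx⊤ − M‖_F² from a point near the scaled leading eigenvector:

```python
import numpy as np

# Our illustration of Theorem 1: GD on f(x) = 0.25 * ||xx' - M||_F^2 for a PSD
# matrix M with an eigen-gap, started inside the basin around sqrt(lam_1) u_1.
rng = np.random.default_rng(1)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lams = np.array([4.0, 2.0, 1.0, 0.5, 0.4, 0.3, 0.2, 0.1])  # lam_1 > lam_2
M = Q @ np.diag(lams) @ Q.T
target = np.sqrt(lams[0]) * Q[:, 0]          # sqrt(lam_1) * u_1

def grad(x):
    return (np.outer(x, x) - M) @ x          # gradient of f, cf. (15)

x = target + 0.02 * rng.standard_normal(n)   # initialization near the truth
eta = 1.0 / (5.0 * lams[0])                  # heuristic step, order 1/lam_1
for _ in range(500):
    x = x - eta * grad(x)

err = min(np.linalg.norm(x - target), np.linalg.norm(x + target))
assert err < 1e-8                            # converges to +/- sqrt(lam_1) u_1
```

The error contracts geometrically at a rate governed by the relative eigen-gap (λ_1 − λ_2)/λ_1, in line with the theorem.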
The question then comes down to whether one can secure an initial guess of this quality. One popular approach is spectral initialization, obtained by computing the leading eigenvector of M. For this simple problem, this already yields a solution of arbitrarily high accuracy. As it turns out, such a spectral initialization approach is particularly useful when dealing with noisy and incomplete measurements. We refer the readers to Section 8 for detailed discussions of spectral initialization methods.
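A minimal sketch of spectral initialization for this problem (our own illustration): plain power iteration extracts the leading eigenvector of M, which, after rescaling by the square root of the Rayleigh quotient, recovers the global minimizer:

```python
import numpy as np

# Our illustration of spectral initialization via power iteration: the leading
# eigenvector of M, scaled by sqrt of the leading eigenvalue, solves (the
# rank-1 approximation problem) up to sign.
rng = np.random.default_rng(2)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lams = np.sort(rng.uniform(0.1, 1.0, n))[::-1]
lams[0] = 2.0                                # enforce an eigen-gap
M = Q @ np.diag(lams) @ Q.T

v = rng.standard_normal(n)
for _ in range(200):                         # power iteration
    v = M @ v
    v /= np.linalg.norm(v)
lam1 = v @ M @ v                             # Rayleigh quotient estimate
x0 = np.sqrt(lam1) * v                       # spectral initialization

target = np.sqrt(lams[0]) * Q[:, 0]
err = min(np.linalg.norm(x0 - target), np.linalg.norm(x0 + target))
assert err < 1e-6
```

Power iteration converges at the rate (λ_2/λ_1)^t, so a larger relative eigen-gap again means faster convergence.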

Global optimization landscape
We then move on to examining the optimization landscape of this simple problem. In particular, what kinds of critical points does f (x) have? This is addressed as follows.
Theorem 2. Consider the problem (14). All local minima of f(·) are global optima. The rest of the critical points are either local maxima or strict saddle points.

Proof of Theorem 2. In view of (15), the first-order condition ∇f(x) = (xx^⊤ − M) x = 0 reveals that the critical points of f(·) are 0 together with {±√λ_k u_k : λ_k > 0, 1 ≤ k ≤ n}. To further categorize the critical points, we need to examine the Hessian matrices as given by (16). Regarding the critical points ±√λ_k u_k, we have
∇²f(±√λ_k u_k) = 2 λ_k u_k u_k^⊤ + λ_k I_n − M.
We can then categorize them as follows:
1. With regards to the points ±√λ_1 u_1, one has
∇²f(±√λ_1 u_1) ⪰ (λ_1 − λ_2) I_n ≻ 0,
and hence they are (equivalent) local minima of f(·);
2. For the points {±√λ_k u_k}_{k=2}^n, one has
u_1^⊤ ∇²f(±√λ_k u_k) u_1 = λ_k − λ_1 < 0 and u_k^⊤ ∇²f(±√λ_k u_k) u_k = 2 λ_k > 0,
and therefore they are strict saddle points of f(·).
3. Finally, the critical point at the origin satisfies
∇²f(0) = −M ⪯ 0,
and is hence either a local maximum (if λ_n > 0) or a strict saddle point (if λ_n = 0).
This result reveals the benign geometry of the problem (14) amenable to optimization. All undesired fixed points of gradient descent are strict saddles, which have negative directional curvature and may not be difficult to escape or avoid.
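The categorization in Theorem 2 can also be verified numerically by inspecting the Hessian eigenvalues at each critical point; the sketch below is our own illustration with an arbitrary spectrum:

```python
import numpy as np

# Our illustration of Theorem 2: the Hessian 2xx' + ||x||^2 I - M, cf. (16),
# is positive definite at +/- sqrt(lam_1) u_1 and indefinite at the other
# scaled eigenvectors; at the origin it equals -M.
rng = np.random.default_rng(3)
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lams = np.array([3.0, 2.0, 1.5, 1.0, 0.5, 0.2])
M = Q @ np.diag(lams) @ Q.T

def hessian(x):
    return 2 * np.outer(x, x) + (x @ x) * np.eye(n) - M

for k in range(n):
    x = np.sqrt(lams[k]) * Q[:, k]           # a critical point of f
    evals = np.linalg.eigvalsh(hessian(x))   # ascending eigenvalues
    if k == 0:
        assert evals[0] > 0                  # local (and global) minimum
    else:
        assert evals[0] < 0                  # strict saddle point

# At the origin the Hessian is -M, which is negative semidefinite here.
assert np.all(np.linalg.eigvalsh(hessian(np.zeros(n))) <= 1e-12)
```

The smallest Hessian eigenvalue at the saddle ±√λ_k u_k is λ_k − λ_1 < 0, which quantifies the negative curvature available for escaping.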

Formulations of a few canonical problems
For a paper of this length, it is impossible to cover all nonconvex statistical problems of interest. Instead, we decide to focus on a few concrete and fundamental matrix factorization problems. This section presents formulations of several such examples that will be visited multiple times throughout this article. Unless otherwise noted, the assumptions made in this section (e.g. restricted isometry for matrix sensing, Gaussian design for phase retrieval) will be imposed throughout the rest of the paper.

Matrix sensing
Suppose that we are given a set of m measurements of M⋆ ∈ R^{n1×n2} of the form
y_i = ⟨A_i, M⋆⟩, i = 1, . . . , m,    (19)
where {A_i ∈ R^{n1×n2}} is a collection of sensing matrices known a priori. We are asked to recover M⋆, which is assumed to be of rank r, from these linear matrix equations [12,41]. When M⋆ = X⋆X⋆^⊤ ∈ R^{n×n} is positive semidefinite, this can be cast as solving the least-squares problem
minimize_{X ∈ R^{n×r}} f(X) := (1/(4m)) Σ_{i=1}^m (⟨A_i, XX^⊤⟩ − y_i)^2.    (20)
Clearly, we cannot distinguish X⋆ from X⋆H for any orthonormal matrix H ∈ O^{r×r}, as they correspond to the same low-rank matrix X⋆X⋆^⊤ = X⋆HH^⊤X⋆^⊤. This simple fact implies that there exist multiple global optima for (20), a phenomenon that holds for most problems discussed herein. For the general case where M⋆ = L⋆R⋆^⊤, we wish to solve
minimize_{L ∈ R^{n1×r}, R ∈ R^{n2×r}} f(L, R) := (1/(4m)) Σ_{i=1}^m (⟨A_i, LR^⊤⟩ − y_i)^2.    (21)
Similarly, we cannot distinguish (L⋆, R⋆) from (L⋆H, R⋆(H^⊤)^{-1}) for any invertible matrix H ∈ R^{r×r}, as L⋆R⋆^⊤ = L⋆HH^{-1}R⋆^⊤. Throughout the paper, we denote by
L⋆ := U⋆Σ⋆^{1/2} and R⋆ := V⋆Σ⋆^{1/2}    (22)
the true low-rank factors, where M⋆ = U⋆Σ⋆V⋆^⊤ stands for its singular value decomposition. In order to make the problem well-posed, we need to make proper assumptions on the sensing operator
A : R^{n1×n2} → R^m, [A(X)]_i = ⟨A_i, X⟩, 1 ≤ i ≤ m.    (23)
A useful property of the sensing operator that enables tractable algorithmic solutions is the Restricted Isometry Property (RIP), which says that the operator preserves the Euclidean norm of the input matrix when restricted to the set of low-rank matrices. More formally:

Definition 6 (RIP [42]). An operator A : R^{n1×n2} → R^m is said to satisfy r-RIP with RIP constant δ_r < 1 if
(1 − δ_r) ‖X‖_F^2 ≤ ‖A(X)‖_2^2 ≤ (1 + δ_r) ‖X‖_F^2
holds simultaneously for all X of rank at most r.
As an immediate consequence, the inner product between two low-rank matrices is also nearly preserved if A satisfies the RIP. Therefore, A behaves like an isometry when restricted to operating on low-rank matrices.

Lemma 3 ([42]). If an operator A satisfies 2r-RIP with RIP constant δ_{2r} < 1, then
|⟨A(X), A(Y)⟩ − ⟨X, Y⟩| ≤ δ_{2r} ‖X‖_F ‖Y‖_F
holds simultaneously for all X and Y of rank at most r.
Notably, many random sensing designs are known to satisfy the RIP with high probability, with one example given below.

Fact 1 ([42]). If the entries of the A_i's are i.i.d. N(0, 1/m), then A satisfies r-RIP with RIP constant δ_r with high probability, as soon as m ≳ (n_1 + n_2) r / δ_r^2.
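The near-isometry behavior is easy to observe empirically. The sketch below (our own illustration; the dimensions are arbitrary) draws sensing matrices with i.i.d. N(0, 1) entries and checks that ‖A(X)‖_2^2 / m concentrates around ‖X‖_F^2 over random rank-r inputs:

```python
import numpy as np

# Our illustration of RIP-like concentration: for A_i with i.i.d. N(0,1)
# entries and m much larger than n*r, (1/m)||A(X)||_2^2 stays close to
# ||X||_F^2 for random rank-r matrices X.
rng = np.random.default_rng(4)
n, r, m = 20, 2, 2000
A = rng.standard_normal((m, n, n))           # m sensing matrices

ratios = []
for _ in range(20):
    L = rng.standard_normal((n, r))
    R = rng.standard_normal((n, r))
    X = L @ R.T                              # a random rank-r matrix
    y = np.tensordot(A, X, axes=([1, 2], [0, 1]))   # y_i = <A_i, X>
    ratios.append((y @ y) / m / np.sum(X ** 2))

# All ratios should be close to 1 over these sampled low-rank inputs.
assert max(abs(t - 1.0) for t in ratios) < 0.2
```

This is only a spot check over sampled matrices; the RIP itself is a uniform statement over all rank-r matrices.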

Phase retrieval and quadratic sensing
Imagine that we have access to m quadratic measurements of a rank-1 matrix M⋆ := x⋆x⋆^⊤ ∈ R^{n×n}:
y_i = (a_i^⊤ x⋆)^2, i = 1, . . . , m,    (24)
where a_i ∈ R^n is the design vector known a priori. How can we reconstruct x⋆ ∈ R^n, or equivalently M⋆ = x⋆x⋆^⊤, from this collection of quadratic equations about x⋆? This problem, often dubbed phase retrieval, arises for example in X-ray crystallography, where one needs to recover a specimen based on the intensities of the diffracted waves scattered by the object [4,[43][44][45]. Mathematically, the problem can be posed as finding a solution to the following program:
minimize_{x ∈ R^n} f(x) := (1/(4m)) Σ_{i=1}^m ((a_i^⊤ x)^2 − y_i)^2.
More generally, consider the quadratic sensing problem, where we collect m quadratic measurements of a rank-r matrix M⋆ := X⋆X⋆^⊤ with X⋆ ∈ R^{n×r}:
y_i = ‖X⋆^⊤ a_i‖_2^2 = a_i^⊤ X⋆X⋆^⊤ a_i, i = 1, . . . , m.
This subsumes phase retrieval as a special case, and comes up in applications such as covariance sketching for streaming data [46,47] and phase space tomography under the name coherence retrieval [48,49]. Here, we wish to solve
minimize_{X ∈ R^{n×r}} f(X) := (1/(4m)) Σ_{i=1}^m (‖X^⊤ a_i‖_2^2 − y_i)^2.
Clearly, this is equivalent to the matrix sensing problem upon taking A_i = a_i a_i^⊤. Here and throughout, we assume the i.i.d. Gaussian design with a_i ∼ N(0, I_n), a tractable model that has been extensively studied recently.
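As a concrete illustration (ours, with heuristic parameter choices for the sample size, step size, and iteration count), the following sketch solves a real-valued phase retrieval instance by spectral initialization followed by Wirtinger-flow-style gradient descent on the quadratic loss above:

```python
import numpy as np

# Our illustration of real-valued phase retrieval: spectral initialization
# from Y = (1/m) sum_i y_i a_i a_i', then gradient descent on
# f(x) = (1/4m) sum_i ((a_i'x)^2 - y_i)^2. Recovery is only up to sign.
rng = np.random.default_rng(5)
n, m = 20, 600
x_star = rng.standard_normal(n)
a = rng.standard_normal((m, n))              # i.i.d. Gaussian design vectors
y = (a @ x_star) ** 2                        # quadratic measurements

# Spectral initialization: leading eigenvector of Y, scaled by sqrt(mean(y)),
# since E[y_i] = ||x_star||_2^2 under the Gaussian design.
Y = (a.T * y) @ a / m
evals, evecs = np.linalg.eigh(Y)
x = np.sqrt(np.mean(y)) * evecs[:, -1]

eta = 0.1 / np.mean(y)                       # heuristic step ~ 0.1/||x_star||^2
for _ in range(1000):
    ax = a @ x
    x = x - (eta / m) * (a.T @ ((ax ** 2 - y) * ax))   # gradient step

err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
assert err < 1e-6 * np.linalg.norm(x_star)
```

Note that the sign ambiguity is intrinsic: (a_i^⊤ x)^2 = (a_i^⊤ (−x))^2, so the error is measured up to a global sign flip.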

Matrix completion
Suppose we observe partial entries of a low-rank matrix M⋆ ∈ R^{n1×n2} of rank r, indexed by the sampling set Ω. It is convenient to introduce a projection operator P_Ω : R^{n1×n2} → R^{n1×n2} such that for an input matrix M ∈ R^{n1×n2},
[P_Ω(M)]_{i,j} = M_{i,j} if (i, j) ∈ Ω, and 0 otherwise.
The matrix completion problem then boils down to recovering M⋆ from P_Ω(M⋆) (or equivalently, from the partially observed entries of M⋆) [1,13]. This arises in numerous scenarios; for instance, in collaborative filtering, we may want to predict the preferences of all users about a collection of movies, based on partially revealed ratings from the users. Throughout this paper, we adopt the following random sampling model: each entry is observed independently with probability 0 < p ≤ 1, i.e.
P{(i, j) ∈ Ω} = p, independently for all i and j.    (31)
For the positive semidefinite case where M⋆ = X⋆X⋆^⊤, the task can be cast as solving
minimize_{X ∈ R^{n×r}} f(X) := (1/(4p)) ‖P_Ω(XX^⊤ − M⋆)‖_F^2.    (32)
When it comes to the more general case where M⋆ = L⋆R⋆^⊤, the task boils down to solving
minimize_{L ∈ R^{n1×r}, R ∈ R^{n2×r}} f(L, R) := (1/(4p)) ‖P_Ω(LR^⊤ − M⋆)‖_F^2.
One parameter that plays a crucial role in determining the feasibility of matrix completion is a certain coherence measure, defined as follows [1].

Definition 7 (Coherence). The coherence parameter µ of M⋆ (with singular value decomposition M⋆ = U⋆Σ⋆V⋆^⊤) is the smallest quantity such that
‖U⋆‖_{2,∞} ≤ √(µr/n_1) and ‖V⋆‖_{2,∞} ≤ √(µr/n_2).
As can be easily shown [13], a low-rank matrix cannot be recovered from a highly incomplete set of entries unless the matrix satisfies the incoherence condition with a small µ. Throughout this paper, we let n := max{n_1, n_2} when referring to the matrix completion problem, and set κ = σ_1(M⋆)/σ_r(M⋆) to be the condition number of M⋆.
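A minimal matrix completion sketch (our own illustration; sampling rate, step size, and iteration count are heuristic choices): observe each entry of a rank-r PSD matrix with probability p, then run GD on the factorized loss f(X) = (1/(4p))‖P_Ω(XX⊤ − M)‖_F² from a spectral start:

```python
import numpy as np

# Our illustration of matrix completion by factorized gradient descent with a
# spectral initialization from the rescaled observed matrix p^{-1} P_Omega(M).
rng = np.random.default_rng(6)
n, r, p = 60, 2, 0.5
X_star = rng.standard_normal((n, r))
M = X_star @ X_star.T

U = np.triu(rng.random((n, n)) < p)
mask = U | U.T                               # symmetric sampling pattern
Y = np.where(mask, M, 0.0)                   # P_Omega(M)

evals, evecs = np.linalg.eigh(Y / p)         # spectral initialization
X = evecs[:, -r:] * np.sqrt(np.maximum(evals[-r:], 0.0))

eta = 0.2 / evals[-1]                        # heuristic step ~ 1/sigma_1(M)
for _ in range(1000):
    G = np.where(mask, X @ X.T - M, 0.0)     # P_Omega(XX' - M)
    X = X - (eta / p) * (G @ X)              # gradient step on the loss

rel_err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
assert rel_err < 1e-2
```

Random Gaussian factors are incoherent with high probability, which is why such an aggressive sampling rate already suffices in this toy run.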

Blind deconvolution (the subspace model)
Suppose that we want to recover two objects h⋆ ∈ C^K and x⋆ ∈ C^N, or equivalently the outer product M⋆ = h⋆x⋆^*, from m bilinear measurements of the form
y_j = b_j^* h⋆ x⋆^* a_j, 1 ≤ j ≤ m.    (35)
To explain why this is called blind deconvolution, imagine we would like to recover two signals g ∈ C^m and d ∈ C^m from their circulant convolution [5]. In the frequency domain, the outputs can be written as y = diag(ĝ) d̂, where ĝ (resp. d̂) is the Fourier transform of g (resp. d). If we have the additional knowledge that ĝ = Ax⋆ and d̂ = Bh⋆ lie in some known subspaces characterized by A = [a_1, · · · , a_m]^* and B = [b_1, · · · , b_m]^*, then y reduces to the bilinear form (35). In this paper, we assume the following semi-random design, a common subspace model studied in the literature [5,22].
Specifically, the a_j's are i.i.d. complex Gaussian vectors, i.e. a_j ∼ N(0, ½ I_N) + i N(0, ½ I_N), and B ∈ C^{m×K} is formed by the first K columns of a unitary discrete Fourier transform (DFT) matrix F ∈ C^{m×m} obeying FF^* = I_m.
To solve this problem, one seeks a solution to
minimize_{h ∈ C^K, x ∈ C^N} f(h, x) := Σ_{j=1}^m |b_j^* h x^* a_j − y_j|^2.
The recovery performance typically depends on an incoherence measure crucial for blind deconvolution.
Definition 8. Let the incoherence parameter µ of h⋆ be the smallest number such that
max_{1≤j≤m} |b_j^* h⋆| ≤ (µ/√m) ‖h⋆‖_2.

Low-rank and sparse matrix decomposition / robust principal component analysis
Suppose we are given a matrix Γ ∈ R^{n1×n2} that is a superposition of a rank-r matrix M⋆ ∈ R^{n1×n2} and a sparse matrix S⋆ ∈ R^{n1×n2}:
Γ = M⋆ + S⋆.
The goal is to separate M⋆ and S⋆ from the (possibly partial) entries of Γ. This problem is also known as robust principal component analysis [7,8,50], since we can think of it as recovering the low-rank factors of M⋆ when the observed entries are corrupted by sparse outliers (modeled by S⋆). The problem spans numerous applications in computer vision, medical imaging, and surveillance. Similar to the matrix completion problem, we assume the random sampling model (31), where Ω is the set of observed entries. In order to make the problem well-posed, we need the coherence parameter of M⋆ as defined in Definition 7 as well, which precludes M⋆ from being too spiky. In addition, it is sometimes convenient to introduce the following deterministic condition on the sparsity pattern and sparsity level of S⋆, originally proposed by [7]. Specifically, it is assumed that the nonzero entries of S⋆ are "spread out", with at most a fraction α of nonzeros per row / column. Mathematically, this means S⋆ ∈ S_α, where
S_α := {S ∈ R^{n1×n2} : ‖S_{i,·}‖_0 ≤ α n_2 for all i, and ‖S_{·,j}‖_0 ≤ α n_1 for all j}.
For the positive semidefinite case where M⋆ = X⋆X⋆^⊤ with X⋆ ∈ R^{n×r}, the task can be cast as solving
minimize_{X ∈ R^{n×r}, S ∈ S_α} f(X, S) := (1/(4p)) ‖P_Ω(XX^⊤ + S − Γ)‖_F^2.    (40)
The general case where M⋆ = L⋆R⋆^⊤ can be formulated similarly by replacing XX^⊤ with LR^⊤ in (40) and optimizing over L ∈ R^{n1×r}, R ∈ R^{n2×r} and S ∈ S_α.

Local refinement via gradient descent
This section contains extensive discussions of local convergence analysis of gradient descent. GD is perhaps the most basic optimization algorithm, and its practical importance cannot be overstated. Developing fundamental understanding of this algorithm sheds light on the effectiveness of many other iterative algorithms for solving nonconvex problems.
In the sequel, we will first examine what standard GD theory (cf. Lemma 1) yields for matrix factorization problems; see Section 5.1. While the resulting computational guarantees are optimal for nearly isotropic sampling operators, they become highly pessimistic for most other problems. We diagnose the cause in Section 5.2.1 and isolate an incoherence condition that is crucial to enable fast convergence of GD. Section 5.2.2 discusses how to enforce proper regularization to promote such an incoherence condition, while Section 5.3 illustrates an implicit regularization phenomenon that allows unregularized GD to converge fast as well. We emphasize that generic optimization theory alone yields overly pessimistic convergence bounds; one needs to blend computational and statistical analyses in order to understand the intriguing performance of GD.

Computational analysis via strong convexity and smoothness
To analyze local convergence of GD, a natural strategy is to resort to the standard GD theory in Lemma 1. This requires us to check whether strong convexity and smoothness hold locally, as done in Section 3.1. If so, then Lemma 1 yields an upper bound on the iteration complexity. This simple strategy works well when, for example, the sampling operator is nearly isotropic. In the sequel, we use a few examples to illustrate the applicability and potential drawback of this analysis strategy.

Measurements that satisfy the RIP (rank-1 case)
We begin with the matrix sensing problem (19) and consider the case where the truth has rank 1, i.e. M⋆ = x⋆x⋆^⊤ for some vector x⋆ ∈ R^n. This requires us to solve
minimize_{x ∈ R^n} f(x) := (1/(4m)) Σ_{i=1}^m (⟨A_i, xx^⊤⟩ − y_i)^2.    (41)
For notational simplicity, this subsection focuses on the symmetric case where A_i = A_i^⊤. The gradient update rule (4) for this problem reads
x^{t+1} = x^t − (η_t/m) Σ_{i=1}^m (⟨A_i, x^t x^{t⊤}⟩ − y_i) A_i x^t.
When the sensing matrices {A_i}_{1≤i≤m} are random and isotropic, (41) can be viewed as a randomized version of the rank-1 matrix factorization problem discussed in Section 3. To see this, consider for instance the case where the A_i's are drawn from the symmetric Gaussian design, i.e. the diagonal entries of A_i are i.i.d. N(0, 1) and the off-diagonal entries are i.i.d. N(0, 1/2). For any fixed x, one has
E[f(x)] = (1/4) ‖xx^⊤ − M⋆‖_F^2,
which coincides with the warm-up example (14) by taking M = x⋆x⋆^⊤. This bodes well for fast local convergence of GD, at least at the population level (i.e. the case when the sample size m → ∞).
What happens in the finite-sample regime? It turns out that if the sensing operator satisfies the RIP (cf. Definition 6), then ∇²f(·) does not deviate too much from its population-level counterpart, and hence f(·) remains locally strongly convex and smooth. This in turn allows one to invoke the standard GD theory to establish local linear convergence.
Theorem 3 (GD for matrix sensing (rank-1)). Consider the problem (41), and suppose the operator (23) satisfies 4-RIP with RIP constant δ_4 ≤ 1/44. If ‖x^0 − x⋆‖_2 ≤ ‖x⋆‖_2/12, then GD with η_t ≡ 1/(3‖x⋆‖_2^2) obeys
‖x^t − x⋆‖_2 ≤ (1 − c_0)^t ‖x^0 − x⋆‖_2, t = 0, 1, . . .
for some universal constant 0 < c_0 < 1. This theorem, which is a deterministic result, is established in Appendix A. An appealing feature is that it is possible for such an RIP to hold as soon as the sample size m is on the order of the information-theoretic limit (i.e. O(n)), in view of Fact 1. The take-home message is: for highly random and isotropic sampling schemes, local strong convexity and smoothness continue to hold even in the sample-limited regime.
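Theorem 3 can be reproduced numerically. The sketch below (our own illustration; the design size and iteration count are arbitrary choices) runs GD for rank-1 matrix sensing under the symmetric Gaussian design, initialized inside the prescribed basin:

```python
import numpy as np

# Our illustration of GD for rank-1 matrix sensing under the symmetric
# Gaussian design (diagonal N(0,1), off-diagonal N(0,1/2)), with the step
# size 1/(3||x_star||^2) and an initialization close to x_star.
rng = np.random.default_rng(7)
n, m = 20, 500
x_star = rng.standard_normal(n)
G = rng.standard_normal((m, n, n))
A = (G + np.transpose(G, (0, 2, 1))) / 2     # symmetric Gaussian design
y = np.einsum('mij,i,j->m', A, x_star, x_star)   # y_i = <A_i, x* x*'>

norm2 = x_star @ x_star
x = x_star + (np.linalg.norm(x_star) / 15) * rng.standard_normal(n) / np.sqrt(n)
eta = 1.0 / (3 * norm2)
for _ in range(300):
    Ax = np.einsum('mij,j->mi', A, x)        # A_i x for every i
    res = np.einsum('mi,i->m', Ax, x) - y    # <A_i, xx'> - y_i
    x = x - (eta / m) * np.einsum('m,mi->i', res, Ax)

err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
assert err < 1e-8
```

In the noiseless setting the iterates converge to ±x⋆ at a constant linear rate, mirroring the deterministic guarantee.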

Measurements that satisfy the RIP (rank-r case)
The rank-1 case is singled out above due to its simplicity; the result certainly goes well beyond rank 1. Again, we focus on the symmetric case 3 (20) with A_i = A_i^T, in which the gradient update rule (4) takes an analogous form. At first, one might imagine that f (·) remains locally strongly convex. This is, unfortunately, not true, as demonstrated by the following example. 3 If the A_i's are asymmetric, the gradient takes a slightly different form. The Hessian quadratic form can be computed explicitly for any Z ∈ R^{n×r} (see [26,29]). Think of the following example for two unit vectors u, v obeying u^T v = 0 and any 0 < δ < 1: the resulting Hessian quadratic form is strictly negative, which violates convexity. Moreover, this happens even when X is arbitrarily close to the truth (by taking δ → 0).
Fortunately, the above issue can be easily addressed. The key is to recognize that one can only hope to recover X up to a global orthonormal transformation, unless further constraints are imposed. Hence, a more suitable error metric is dist(X, X) := min_{H ∈ O^{r×r}} ||X H − X||_F, a counterpart of the Euclidean error that accounts for this global ambiguity. For notational convenience, we let H_X denote the minimizing rotation. Finding H_X is a classical problem called the orthogonal Procrustes problem [51]. With these metrics in mind, we are ready to generalize the standard GD theory in Lemma 1. In what follows, we assume that X is a global minimizer of f (·), and make the further homogeneity assumption ∇f (X)H = ∇f (XH) for any orthonormal matrix H ∈ O^{r×r} -- a common fact that arises in matrix factorization problems. Lemma 4. Suppose that f is β-smooth within a local ball B_ζ(X) := {X : ||X − X||_F ≤ ζ}, and that the restricted strong convexity condition (50) holds for any X ∈ B_ζ(X) and any Z. Proof of Lemma 4. For notational simplicity, write H_t := H_{X_t}. From the GD update rule, one can bound dist(X_{t+1}, X), where the first inequality follows from the definition of dist(·, ·), the second line follows since ∇f (X) = 0, the third line uses the assumption ∇f (X_t H_t) = ∇f (X_t)H_t, and the last line arises from the smoothness condition. It remains to control the last term of (51). To this end, applying the fundamental theorem of calculus [32, Chapter XIII, Theorem 4.2], we express the gradient difference as an integral of the Hessian along the line segment X(τ ) connecting the two points. We then invoke the condition (50) to reach the desired contraction. Putting the above results together and taking η_t = α/β^2, we arrive at the claimed bound. Recognizing that X_0 ∈ B_ζ(X), we can complete the proof by induction.
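The orthogonal Procrustes alignment admits a closed form via the SVD, which the sketch below illustrates. The sizes and the particular rotation are illustrative assumptions; the SVD recipe itself is the classical solution of the Procrustes problem.

```python
import numpy as np

def procrustes_align(X, X_star):
    """dist(X, X*) = min_{H in O(r)} ||X H - X*||_F; the minimizer is
    H = U V^T, where U S V^T is an SVD of X^T X_star."""
    U, _, Vt = np.linalg.svd(X.T @ X_star)
    H = U @ Vt
    return np.linalg.norm(X @ H - X_star), H

# X equals X_star up to a global rotation: the plain Euclidean error is
# large, while dist(X, X_star) vanishes.
rng = np.random.default_rng(1)
X_star = rng.standard_normal((8, 2))
Q = np.array([[0.0, -1.0], [1.0, 0.0]])   # a 90-degree rotation in O(2)
X = X_star @ Q
d, H = procrustes_align(X, X_star)
print(d, np.linalg.norm(X - X_star))
```

This is precisely why dist(·, ·), rather than the plain Euclidean distance, is the right yardstick for factored representations.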
The condition (50) is a modification of strong convexity that accounts for global rotation. In particular, it restricts attention to directions of the form Z H_Z − X, where one first adjusts the orientation of Z to best align with the global minimizer. To confirm that such a restriction is sensible, we revisit Example 1. With proper rotation of Z, the quadratic form in question becomes strictly positive for δ ≤ 1/3. In fact, if X is sufficiently close to the truth, then the condition (50) is valid for (45). Details are deferred to Appendix C.
Further, similar to the analysis for the rank-1 case, we can demonstrate that if A satisfies the RIP for some sufficiently small RIP constant, then ∇ 2 f (·) is locally not far from ∇ 2 f ∞ (·) in Example 1, meaning that the condition (50) continues to hold for some α > 0. This leads to the following result. It is assumed that the ground truth X has condition number κ.
Three implications of Theorem 4 merit particular attention: (1) the quality of the initialization depends on the least singular value of the truth, so as to ensure that the estimation error does not overwhelm any of the important signal directions; (2) the convergence rate becomes a function of the condition number κ: the better conditioned the truth is, the faster GD converges; (3) provided with a good initialization, GD converges linearly as long as the sample size m is on the order of the information-theoretic limits (i.e. O(nr)), in view of Fact 1.
Remark 3 (Asymmetric case). Our discussion continues to hold for the more general case where M = L R^T, although an extra regularization term has been suggested to balance the sizes of the two factors. Specifically, we introduce a regularized version of the loss (21), with λ a regularization parameter, e.g. λ = 1/32 as suggested in [23]. 4 If one applies GD to this regularized loss, then the convergence rate for the symmetric case remains valid upon replacing X (resp. X_t) with X = L R^T (resp. X_t = L_t R_t^T) in the error metric.

Measurements that do not obey the RIP
There is no shortage of important examples where the sampling operators fail to satisfy the RIP. For these cases, the standard theory in Lemma 1 (or Lemma 2) either is not directly applicable or leads to pessimistic computational guarantees. This subsection presents a few such cases. We start with phase retrieval (27), whose gradient update rule is given by (55). This algorithm, also dubbed Wirtinger flow, was first investigated in [18]; the name stems from the fact that Wirtinger calculus is used to compute the gradient in the complex-valued case. The associated sampling operator A (cf. (23)), unfortunately, does not satisfy the standard RIP (cf. Definition 6) unless the sample size far exceeds the statistical limit; see [44,46]. 5 We can, however, still evaluate the local strong convexity and smoothness parameters to see what computational bounds they produce. Starting from the Hessian of (27) and using standard concentration inequalities for random matrices [53,54], one derives the following strong convexity and smoothness bounds [26,55,56]. 6 Lemma 5 (Local strong convexity and smoothness for phase retrieval). Consider the problem (27). There exist some constants c_0, c_1, c_2 > 0 such that with probability at least 1 − O(n^{-10}), the Hessian bounds hold simultaneously for all x obeying ||x − x||_2 ≤ c_1 ||x||_2, provided that m ≥ c_0 n log n.
This lemma says that f (·) is locally 0.5-strongly convex and c_2 n-smooth when the sample size m ≳ n log n. The sample complexity only exceeds the information-theoretic limit by a logarithmic factor. Applying Lemma 1 then reveals the following. Theorem 5 (GD for phase retrieval (loose bound) [18]). Under the assumptions of Lemma 5, the GD iterates converge linearly, with a contraction factor of 1 − O(1/n) per iteration. This is precisely the computational guarantee given in [18], albeit derived via a different argument. The above iteration complexity bound, however, is not appealing in practice: it requires O(n log(1/ε)) iterations to guarantee ε-accuracy (in a relative sense). For large-scale problems where n is very large, such an iteration complexity could be prohibitive.
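The Wirtinger flow update is simple enough to sketch in a few lines. The snippet below runs GD on the intensity loss f(x) = (1/(4m)) Σ_i ((a_i^T x)^2 − y_i)^2 in the real-valued case; the sizes, the hand-placed initialization (standing in for spectral initialization), and the conservative step size are illustrative assumptions.

```python
import numpy as np

# A sketch of Wirtinger flow for real-valued phase retrieval.
rng = np.random.default_rng(2)
n, m = 10, 400
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
a = rng.standard_normal((m, n))
y = (a @ x_star) ** 2                      # intensity measurements

def grad(x):
    """Gradient of f(x) = (1/(4m)) sum_i ((a_i^T x)^2 - y_i)^2."""
    ax = a @ x
    return a.T @ ((ax ** 2 - y) * ax) / m

x = x_star + 0.05 * rng.standard_normal(n)  # stand-in for spectral initialization
eta = 0.05                                   # conservative step size for ||x*|| = 1
for _ in range(1000):
    x -= eta * grad(x)

# the solution is identifiable only up to global sign
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(err)
```

Even this toy run hints at the tension discussed above: a small (dimension-dependent) step size is needed for stability under the crude smoothness bound, which is exactly what inflates the worst-case iteration count.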
Phase retrieval is certainly not the only problem where classical results in Lemma 1 yield unsatisfactory answers. The situation is even worse for other important problems like matrix completion and blind deconvolution, where strong convexity (or the modified version accounting for global ambiguity) does not hold at all unless we restrict attention to a constrained class of decision variables. All of this calls for new ideas in establishing computational guarantees that match practical performances.

Improved computational guarantees via restricted geometry and regularization
As emphasized in the preceding subsection, two issues stand out in the absence of RIP: • The smoothness condition may not be well-controlled; • Local strong convexity may fail, even if we account for global ambiguity.
We discuss how to address these issues, by first identifying a restricted region with amenable geometry for fast convergence in Section 5.2.1 and then applying regularized gradient descent to ensure the iterates stay in the restricted region in Section 5.2.2.

Restricted strong convexity and smoothness
While the desired strong convexity and smoothness (or regularity conditions) may fail to hold in the entire local ball, it is possible for them to arise when we restrict ourselves to a small subset of the local ball and/or a set of special directions. Take phase retrieval for example: f is locally 0.5-strongly convex, but the smoothness parameter is exceedingly large (see Lemma 5). On closer inspection, those points x that are too aligned with any of the sampling vectors {a_i} incur ill-conditioned Hessians. For instance, suppose x is a unit vector independent of {a_i}. Then the point x = x + δ a_j / ||a_j||_2 for some constant δ often results in an extremely large x^T ∇^2 f (x) x. 7 This simple instance suggests that, in order to ensure well-conditioned Hessians, one needs to preclude points too "coherent" with the sampling vectors, as formalized below.
Lemma 6 (Restricted smoothness for phase retrieval [26]). Under the assumptions of Lemma 5, there exist some constants c_0, ..., c_3 > 0 such that if m ≥ c_0 n log n, then with probability at least 1 − O(mn^{-10}), the restricted smoothness bound holds. In words, the desired smoothness is guaranteed when considering only points sufficiently near-orthogonal to all sampling vectors. Such a near-orthogonality property will be referred to as "incoherence" between x and the sampling vectors.
Going beyond phase retrieval, the notion of incoherence is not only crucial to control smoothness, but also plays a critical role in ensuring local strong convexity (or regularity conditions). A partial list of examples include matrix completion, quadratic sensing, blind deconvolution and demixing, etc [19,21,22,26,[57][58][59].
In the sequel, we single out the matrix completion problem to illustrate this fact. The interested readers are referred to [19,21,52] for regularity conditions for matrix completion, to [57] for strong convexity and smoothness for quadratic sensing, and to [26,59] (resp. [22,58]) for strong convexity and smoothness (resp. regularity conditions) for blind deconvolution and demixing. 7 With high probability, (a_j^T x)^2 = (1 − o(1)) δ^2 n, and hence the smoothness parameter of the Hessian (cf. (56)) at this point x is at least on the order of (1 − o(1)) δ^4 n^2 / m, much larger than the strong convexity parameter when m ≪ n^2.
Lemma 7 (Restricted strong convexity and smoothness for matrix completion [26]). Consider the problem (32). Suppose that n^2 p ≥ c_0 κ^2 µ r n log n for some large constant c_0 > 0. Then with probability 1 − O(n^{-10}), the Hessian obeys the desired strong convexity and smoothness bounds for all Z (with H_Z defined in (49)) and all X satisfying both a proximity condition and an incoherence condition with parameter at most c_1 / √(κ^3 µ r log^2 n) for some constant c_1 > 0. 8 This lemma confines attention to the set of points obeying the stated incoherence condition. Given that each observed entry M_{i,j} can be viewed as e_i^T M e_j, the sampling basis relies heavily on the standard basis vectors. As a result, the above lemma is essentially imposing conditions on the incoherence between X and the sampling basis.

Regularized gradient descent
While we have demonstrated favorable geometry within the set of local points satisfying the desired incoherence condition, the challenge remains as to how to ensure the GD iterates fall within this set. A natural strategy is to enforce proper regularization. Several auxiliary regularization procedures have been proposed to explicitly promote the incoherence constraints [16, 19-22, 58, 60, 61], in the hope of improving computational guarantees. Specifically, one can regularize the loss function by adding an additional regularization term G(·) to the objective and running GD w.r.t. the regularized problem, with λ > 0 the regularization parameter. For example: • Matrix completion (33): a regularized loss has been proposed in [16,19,62] with certain scalars α_1, ..., α_4 > 0. There are numerous choices of the scalar regularizer G_0; the one suggested by [19] is G_0(z) = max{z − 1, 0}^2. With suitable step size and proper initialization, GD w.r.t. the regularized loss provably yields ε-accuracy in O(poly(n) log(1/ε)) iterations, provided that the sample size obeys n^2 p ≳ µ^2 κ^6 n r^7 log n [19].
• Blind deconvolution (36): [22,58,63] suggest a regularized loss of a similar flavor. It has been demonstrated that under proper initialization and step size, gradient methods w.r.t. the regularized loss reach ε-accuracy in O(poly(m) log(1/ε)) iterations, provided that the sample size obeys m ≳ (K + N) log^2 m [22].
Figure 3: The GD iterates and the locally strongly convex and smooth region (the shaded region). (Left) When this region is an ℓ_2 ball, standard GD theory implies ℓ_2 convergence. (Right) When this region is a polytope, the implicit regularization phenomenon implies that the GD iterates still stay within this nice region.
In both cases, the regularization terms penalize, among other things, the incoherence measure between the decision variable and the corresponding sampling basis. We note, however, that the regularization terms are often found unnecessary in both theory and practice, and the theoretical guarantees derived in this line of work are also subject to improvement, as unveiled in the next subsection (cf. Section 5.3). Two other regularization approaches are also worth noting: (1) truncated gradient descent; (2) projected gradient descent. Given that they are extensions of vanilla GD and might enjoy additional benefits, we postpone their discussion to Section 6.

The phenomenon of implicit regularization
Despite the theoretical success of regularized GD, it is often observed that vanilla GD -in the absence of any regularization -converges geometrically fast in practice. One intriguing fact is this: for the problems mentioned above, GD automatically forces its iterates to stay incoherent with the sampling vectors / matrices, without any need of explicit regularization [26,57,59]. This means that with high probability, the entire GD trajectory lies within a nice region that enjoys desired strong convexity and smoothness, thus enabling fast convergence.
To illustrate this fact, we display in Fig. 3 a typical GD trajectory. The incoherence region -- which enjoys local strong convexity and smoothness -- is often a polytope (see the shaded region in the right panel of Fig. 3). Implicit regularization suggests that with high probability, the entire GD trajectory is constrained within this polytope, thus exhibiting linear convergence. It is worth noting that this cannot be derived from generic GD theory like Lemma 1. For instance, Lemma 1 implies that starting with a good initialization, the next iterate experiences ℓ_2 error contraction, but it falls short of enforcing the incoherence condition and hence does not preclude the iterates from leaving the polytope.
In the sequel, we start with phase retrieval as the first example. Theorem 6 (GD for phase retrieval (improved bound) [26]). Under the assumptions of Lemma 5, the GD iterates with proper initialization (see, e.g., spectral initialization in Section 8.2) and η_t ≡ 1/(c_3 ||x||_2^2 log n) obey both the error contraction (64a) and the incoherence condition (64b) for all t ≥ 0 with probability 1 − O(n^{-10}). Here, c_3, c_4 > 0 are some constants. In words, (64b) reveals that all iterates are incoherent w.r.t. the sampling vectors, and hence fall within the nice region characterized in Lemma 6. With this observation in mind, (64a) shows that vanilla GD converges in O(log n log(1/ε)) iterations. This significantly improves upon the computational bound in Theorem 5, which was derived from the smoothness property without restricting to the incoherence region.
Similarly, for quadratic sensing (29), with the GD update rule defined analogously, we have the following result, which generalizes Theorem 6 to the low-rank setting.
Theorem 7 (GD for quadratic sensing [57]). Consider the problem (29). Suppose the sample size satisfies m ≥ c_0 n r^4 κ^3 log n for some large constant c_0. Then with probability 1 − O(mn^{-10}), the GD iterates with proper initialization (see, e.g., spectral initialization in Section 8.2) and suitable step size obey both linear error contraction and the desired incoherence condition. This theorem demonstrates that vanilla GD converges within O(max{r, log n}^2 log(1/ε)) iterations for quadratic sensing of a rank-r matrix. This significantly improves upon the computational bounds in [56], which do not consider the incoherence region.
The next example is matrix completion (32), for which the GD update rule reads as in (67), with P_Ω defined in (30). The theory for this update rule is as follows. Theorem 8 (GD for matrix completion [26]). Consider the problem (32). Suppose that the sample size satisfies n^2 p ≥ c_0 µ^3 r^3 n log^3 n for some large constant c_0 > 0, and that the condition number κ of M = X X^T is a fixed constant. With probability at least 1 − O(n^{-3}), the GD iterates (67) with proper initialization (see, e.g., spectral initialization in Section 8.2) satisfy both (68a) and (68b) for all t ≥ 0, with H_{X_t} defined in (49). Here, c_1 > 0 is some constant, and η_t ≡ c_2 /(κσ_1(M)) for some constant c_2 > 0.
This theorem demonstrates that vanilla GD converges within O(log(1/ε)) iterations. The key enabler of such a convergence rate is the property (68b), which basically implies that the GD iterates stay incoherent with the standard basis vectors. A byproduct is that GD converges not only in the Euclidean norm, but also in other more refined error metrics, e.g. the one measured by the ℓ_2 / ℓ_∞ norm.
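The matrix-completion update (67) is equally easy to sketch. The snippet below runs vanilla GD on f(X) = (1/(2p)) ||P_Ω(X X^T − M)||_F^2 in the symmetric case; the sizes, sampling rate, hand-placed initialization, and step-size constant are illustrative assumptions.

```python
import numpy as np

# A sketch of vanilla GD for symmetric matrix completion; the gradient of
# f(X) = (1/(2p)) ||P_Omega(X X^T - M)||_F^2 is (2/p) P_Omega(X X^T - M) X.
rng = np.random.default_rng(3)
n, r, p = 60, 2, 0.5
X_star = rng.standard_normal((n, r)) / np.sqrt(r)
M = X_star @ X_star.T
upper = np.triu(rng.random((n, n)) < p)     # symmetric Bernoulli(p) pattern
mask = upper | upper.T

def grad(X):
    return (2.0 / p) * (mask * (X @ X.T - M)) @ X

X = X_star + 0.01 * rng.standard_normal((n, r))  # stand-in for spectral init
eta = 0.1 / np.linalg.norm(M, 2)                  # eta_t ~ c / sigma_1(M)
for _ in range(800):
    X -= eta * grad(X)

rel_err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
print(rel_err)
```

No explicit incoherence projection appears anywhere in the loop; as the theorem indicates, the iterates remain incoherent with the standard basis on their own.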
The last example is blind deconvolution. To measure the discrepancy between any pair z := (h, x) and the truth, we define a distance function that accounts for the unrecoverable global scaling and phase. The gradient method, also called Wirtinger flow (WF), enjoys the following theoretical support.
Theorem 9 (WF for blind deconvolution [26]). Consider the problem (36). Suppose the sample size satisfies m ≥ c_0 µ^2 max{K, N} poly log m for some large constant c_0 > 0. Then there is some constant c_1 > 0 such that with probability exceeding 1 − O(min{K, N}^{-5}), the iterates (70) with proper initialization (see, e.g., spectral initialization in Section 8.2) and η_t ≡ c_1 converge linearly. For conciseness, we only state that the estimation error reaches ε-accuracy in O(log(1/ε)) iterations. The incoherence conditions also provably hold across all iterations; see [26] for details. Similar results have been derived for the blind demixing case as well [59].
Finally, we remark that the desired incoherence conditions cannot be established via generic optimization theory. Rather, these are proved by exploiting delicate statistical properties underlying the models of interest.

Notes
Two-stage nonconvex algorithms for matrix factorization were pioneered in 2009 by Keshavan et al. [16,62], where the authors studied the spectral method followed by (regularized) gradient descent on Grassmann manifolds. Partly due to the popularity of convex programming, the gradient stage of [16,62] received less attention than convex relaxation and spectral methods around that time. A recent work that further popularized the gradient methods for matrix factorization is Candès et al. [18], which provided the first convergence guarantees for gradient descent (or Wirtinger flow) for phase retrieval. Local convergence of (regularized) GD was later established for matrix completion (without resorting to Grassmann manifolds) [19], matrix sensing [23,52], and blind deconvolution under subspace prior [22]. These works were all based on regularity conditions within a local ball. The resulting iteration complexities for phase retrieval, matrix completion, and blind deconvolution were all sub-optimal, which scaled at least linearly with the problem size. Near-optimal computational guarantees were first derived by [26] via a leave-one-out analysis. Notably, all of these works are local results and rely on proper initialization. Later on, GD was shown to converge within a logarithmic number of iterations for phase retrieval, even with random initialization [71].

Variants of gradient descent
This section introduces several variants of gradient descent that serve different purposes, including improving computational performance, enforcing additional structures of the estimates, and removing the effects of outliers, to name a few.

Projected gradient descent
Projected gradient descent modifies vanilla GD (4) by adding a projection step in order to enforce additional structures of the iterates, that is, x_{t+1} = P_C(x_t − η_t ∇f (x_t)), where P_C denotes the projection onto the constraint set C, which can be either convex or nonconvex. For many important sets C encountered in practice, the projection step can be implemented efficiently, sometimes even with a closed-form solution.
There are two common purposes for including a projection step: 1) to enforce the iterates to stay in a region with benign geometry, whose importance has been explained in Section 5.2.1; 2) to encourage additional low-dimensional structures of the iterates that may be available from prior knowledge.

Projection for computational benefits
Here, the projection is to ensure the running iterates stay incoherent with the sampling basis, a property that is crucial to guarantee the algorithm descends properly in every iteration (see Section 5.2.1). One notable example serving this purpose is projected GD for matrix completion [21,60,61], where in the positive semidefinite case (i.e. M = X X^T), one runs projected GD w.r.t. the loss function f (·) in (32): X_{t+1} = P_C(X_t − η_t ∇f (X_t)), where η_t is the step size and P_C denotes the Euclidean projection onto the set C of incoherent matrices defined in (74), with X_0 being the initialization and c a predetermined constant (e.g. c = 2). This projection guarantees that the iterates stay in a nice incoherent region w.r.t. the sampling basis (similar to the one prescribed in Lemma 7), thus achieving fast convergence. Moreover, this projection can be implemented via a row-wise "clipping" operation applied to each row i = 1, 2, ..., n. The convergence guarantee for this update rule is given below, which offers slightly different prescriptions in terms of sample complexity and convergence rate from Theorem 8 using vanilla GD.
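The row-wise clipping operation admits a one-line implementation. In the sketch below, the threshold tau is a stand-in for the incoherence bound c √(µr/n) ||X_0|| appearing in (74); the example matrix is an illustrative assumption.

```python
import numpy as np

def clip_rows(X, tau):
    """Row-wise clipping: the Euclidean projection onto
    {X : ||X_{i,.}||_2 <= tau for every row i}, rescaling each
    offending row back onto the boundary."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.minimum(1.0, tau / np.maximum(norms, 1e-15))

X = np.array([[3.0, 4.0],    # row norm 5  -> clipped to norm 1
              [0.3, 0.4],    # row norm .5 -> untouched
              [1.0, 0.0]])   # row norm 1  -> untouched
Y = clip_rows(X, 1.0)
print(np.linalg.norm(Y, axis=1))
```

Since each row is handled independently, the projection costs no more than a single pass over the iterate, which is why it adds essentially nothing to the per-iteration complexity.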
This theorem says that projected GD takes O(µr log(1/ε)) iterations to yield ε-accuracy (in a relative sense). Remark 4. The results can be extended to the more general asymmetric case by applying modifications similar to those mentioned in Remark 3; see [60,61].

Projection for incorporating structural priors
In many problems of practical interest, we might be given some prior knowledge about the signal of interest, encoded by a constraint set x ∈ C. Therefore, it is natural to apply projection to enforce the desired structural constraints. One such example is sparse phase retrieval [73,74], where it is known a priori that x in (26) is k-sparse with k ≪ n. If we have prior knowledge about ||x||_1, then we can pick the constraint set C to be an ℓ_1 ball, which promotes sparsity since a sparse signal often (although not always) has low ℓ_1 norm. With this convex constraint set in place, projected GD w.r.t. the loss function (27) can be implemented efficiently [75]. The theoretical guarantee of projected GD for sparse phase retrieval is given below.
Another possible projection constraint set for sparse phase retrieval is the (nonconvex) set of k-sparse vectors [76]. This leads to a hard-thresholding operation: P_C(x) becomes the best k-term approximation of x, obtained by keeping the k largest entries (in magnitude) of x and setting the rest to 0. The readers are referred to [76] for details. See also [74] for a thresholded GD algorithm -- which enforces adaptive thresholding rather than projection to promote sparsity -- for solving the sparse phase retrieval problem. We caution, however, that Theorem 11 does not imply that the sample complexity for projected GD (or thresholded GD) is O(k log n). So far there is no tractable procedure that can provably guarantee a sufficiently good initial point x_0 when m ≍ k log n (see a discussion of the spectral initialization method in Section 8.3.3). Rather, all computationally feasible algorithms (both convex and nonconvex) analyzed so far require sample complexity at least on the order of k^2 log n under i.i.d. Gaussian designs, unless k is sufficiently large or other structural information is available [46,74,77-79]. All in all, the computational bottleneck for sparse phase retrieval lies in the initialization stage.
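The best k-term approximation described above is a short routine; the example vector is an illustrative assumption.

```python
import numpy as np

def best_k_term(x, k):
    """P_C for the set of k-sparse vectors: keep the k largest-magnitude
    entries of x and zero out the rest (hard thresholding)."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    out[keep] = x[keep]
    return out

x = np.array([0.1, -2.0, 0.5, 3.0, -0.2])
print(best_k_term(x, 2))
```

Unlike the ℓ_1-ball projection, this map is the exact Euclidean projection onto a nonconvex set, yet it is computable in O(n log n) time via sorting.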

Truncated gradient descent
Truncated gradient descent proceeds by trimming away a subset of the measurements when forming the descent direction, typically in an adaptive fashion: the gradient is averaged only over a retained subset of samples, where the trimming operator T effectively drops samples that bear undesirable influence over the search directions.
There are two common purposes for enforcing a truncation step: (1) to remove samples whose associated design vectors are too coherent with the current iterate [20,80,81], in order to accelerate convergence and improve sample complexity; (2) to remove samples that may be adversarial outliers, in the hope of improving robustness of the algorithm [24,61,82].

Truncation for computational and statistical benefits
We use phase retrieval to illustrate this benefit. All results discussed so far require a sample size exceeding the order of n log n. When it comes to the sample-limited regime where m ≍ n, there is no guarantee for strong convexity (or the regularity condition) to hold. This presents significant challenges for nonconvex methods, in a regime of critical importance for practitioners.
To better understand the challenge, recall the GD rule (55). When m is exceedingly large, the negative gradient concentrates around the population-level gradient, which forms a reliable search direction. However, when m ≍ n, the gradient -- which depends on 4th moments of {a_i} and is heavy-tailed -- may deviate significantly from its mean, thus resulting in unstable search directions.
To stabilize the search directions, one strategy is to trim away those gradient components ∇f_i(x_t) := ((a_i^T x_t)^2 − y_i) a_i a_i^T x_t whose size deviates too much from the typical size. Specifically, the truncation rule proposed in [20] retains only the samples satisfying certain trimming criteria, specified by predetermined thresholds α_lb, α_ub, α_h. 9 This trimming rule -- called truncated Wirtinger flow -- effectively removes the "heavy tails", thus leading to much better concentration and hence enhanced performance.
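A simplified variant of this trimming idea can be sketched as follows. The specific criteria here (bounding |a_i^T x| relative to ||x||, and bounding the residual relative to its mean size) are a stand-in for the exact rule of [20], and the threshold values are illustrative assumptions.

```python
import numpy as np

def truncated_wf_grad(x, A, y, a_lb=0.3, a_ub=5.0, a_h=5.0):
    """One truncated-WF search direction (simplified sketch of [20]):
    average gradient components only over samples passing the trimming tests."""
    ax = A @ x                         # a_i^T x
    res = ax ** 2 - y                  # residuals of the intensity measurements
    norm_x = np.linalg.norm(x)
    typical = np.mean(np.abs(res))     # proxy for the typical residual size
    keep = ((np.abs(ax) >= a_lb * norm_x)
            & (np.abs(ax) <= a_ub * norm_x)
            & (np.abs(res) <= a_h * typical * np.abs(ax) / norm_x))
    return A[keep].T @ (res[keep] * ax[keep]) / len(y)

rng = np.random.default_rng(5)
n, m = 10, 200
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2
g = truncated_wf_grad(x_star, A, y)
print(np.linalg.norm(g))   # the truth is a stationary point: all residuals vanish
```

The point of the construction is visible in the `keep` mask: samples whose design vectors are too coherent with the current iterate, or whose residuals are outsized, simply do not enter the average.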
Remark 5. In this case, the truncated gradient is clearly not smooth, and hence we need to resort to the regularity condition (see Definition 3) with g(x) = ∇f_tr(x). Specifically, the proof of Theorem 12 consists of establishing the regularity condition for all x within a local ball around the truth, where λ, µ ≍ 1. See [20] for details.
In comparison to vanilla GD, the truncated version provably achieves two benefits: • Optimal sample complexity: given that one needs at least n samples to recover n unknowns, the sample complexity m ≍ n is orderwise optimal; • Optimal computational complexity: truncated WF yields ε-accuracy in O(log(1/ε)) iterations. Since each iteration takes time proportional to that taken to read the data, the computational complexity is nearly optimal.
At the same time, this approach is particularly stable in the presence of noise, enjoying a stability bound that is minimax optimal. The readers are referred to [20] for precise statements.

Truncation for removing sparse outliers
In many problems, the collected measurements may suffer from corruption by sparse outliers, and the gradient descent iterates need to be carefully monitored to remove the undesired effects of the outliers (which may take arbitrary values). Take robust PCA (40) as an example, in which a fraction α of the revealed entries are corrupted by outliers. At the tth iteration, one can first try to identify the support of the sparse matrix S by hard thresholding the residual. Here, c > 1 is some predetermined constant (e.g. c = 3), and the operator H_l(·) keeps an entry of A only if it is simultaneously among the l largest entries (in magnitude) in its row and in its column, zeroing out all other entries. The idea is simple: an entry is likely to be an outlier if it is simultaneously among the largest entries in the corresponding row and column. The thresholded residual S_{t+1} then becomes our estimate of the sparse outlier matrix S in the (t+1)-th iteration. With this in place, we update the estimate for the low-rank factor by applying projected GD, where C is the same as (74) to enforce the incoherence condition. This method has the following theoretical guarantee: Theorem 13 (Nonconvex robust PCA [61]). Assume that the condition number κ of M = X X^T is a fixed constant. Suppose that the sample size and the sparsity of the outliers satisfy n^2 p ≥ c_0 µ^2 r^2 n log n and α ≤ c_1 /(µr) for some constants c_0, c_1 > 0. With probability at least 1 − O(n^{-1}), the iterates converge linearly for all t ≥ 0, provided that dist^2(X_0, X) ≤ c_3 σ_r(M). Here, 0 < c_2, c_3 < 1 are some constants, and η_t ≡ c_4 /(µrσ_1(M)) for some constant c_4 > 0.
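The operator H_l(·) reduces to a pair of rank computations; the example matrix below is an illustrative assumption.

```python
import numpy as np

def hard_threshold(A, l):
    """H_l: keep A[j, k] only if it is among the l largest (in magnitude)
    entries of both its row and its column; zero out everything else."""
    row_rank = (-np.abs(A)).argsort(axis=1).argsort(axis=1)  # 0 = largest in row
    col_rank = (-np.abs(A)).argsort(axis=0).argsort(axis=0)  # 0 = largest in col
    return A * ((row_rank < l) & (col_rank < l))

A = np.array([[9.0, 0.1, 0.2],
              [0.1, 0.2, 8.0],
              [0.3, 7.0, 0.1]])
print(hard_threshold(A, 1))   # isolates the dominant entry per row/column
```

Requiring an entry to dominate in both its row and its column is what prevents a large but legitimate low-rank entry (which tends to be spread across a whole row or column) from being mistaken for an outlier.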
Remark 6. In the full data case, the convergence rate can be improved to 1 − c 2 for some constant 0 < c 2 < 1.
This theorem essentially says that as long as the fraction of entries corrupted by outliers does not exceed O(1/(µr)), the nonconvex algorithm described above provably recovers the true low-rank matrix in about O(µr) iterations (up to some logarithmic factor). When r = O(1), this means that the nonconvex algorithm succeeds even when a constant fraction of entries are corrupted.
Another truncation strategy for removing outliers is based on the sample median, as the median is known to be robust against arbitrary outliers [24,82]. We illustrate this median-truncation approach through an example of robust phase retrieval [24], where we assume a subset of the samples in (26) is corrupted arbitrarily, with its index set denoted by S and |S| = αm; mathematically, the measurements indexed by S can take arbitrary values. The goal is to still recover x in the presence of many outliers (e.g. a constant fraction of the measurements are outliers). It is obvious that the original GD iterates (55) are not robust, since the residual can be perturbed arbitrarily whenever i ∈ S. Hence, we instead include only a subset of the samples when forming the search direction, yielding a truncated GD update rule in which the set T_t only includes samples whose residual size |r_{t,i}| does not deviate much from the median of {|r_{t,j}|}_{1≤j≤m}, with median(·) denoting the sample median. As the iterates get close to the ground truth, we expect the residuals of the clean samples to decrease and cluster, while the residuals of the outliers remain large. In this situation, the median provides a robust means to tell them apart. One has the following theory, which reveals the success of median-truncated GD even when a constant fraction of the measurements are arbitrarily corrupted.
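The median-based selection rule amounts to a one-line mask. In the sketch below, the multiplier c_h is an illustrative assumption standing in for the threshold used in [24].

```python
import numpy as np

def median_truncation_mask(residuals, c_h=3.0):
    """Keep sample i iff |r_i| <= c_h * median(|r_j|): samples with
    outsized residuals are dropped from the search direction."""
    return np.abs(residuals) <= c_h * np.median(np.abs(residuals))

r = np.array([0.1, -0.2, 0.15, 50.0, -0.05])   # one grossly corrupted sample
print(median_truncation_mask(r))
```

Because the median of the absolute residuals is insensitive to a minority of arbitrarily large entries, the threshold itself cannot be hijacked by the outliers, unlike a mean-based rule.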

Generalized gradient descent
In all the examples discussed so far, the loss function f (·) has been smooth. When f (·) is nonsmooth and non-differentiable, it is possible to continue to apply GD using the generalized gradient (e.g. the subgradient) [83]. As an example, consider again the phase retrieval problem, but with an alternative loss function that minimizes the quadratic loss of the amplitude-based measurements, namely f_amp(x) := (1/(2m)) Σ_{i=1}^m (|a_i^T x| − √y_i)^2. Clearly, f_amp(x) is nonsmooth, and its generalized gradient is given by, with a slight abuse of notation, ∇f_amp(x) := (1/m) Σ_{i=1}^m (a_i^T x − √y_i · sgn(a_i^T x)) a_i. We can simply execute GD w.r.t. this generalized gradient: x_{t+1} = x_t − η_t ∇f_amp(x_t). The amplitude-based loss f_amp(·) often has better curvature around the truth compared with the intensity-based loss f (·) defined in (29); see [81,84,85] for detailed discussions. The theory is given as follows.
In comparison to Theorem 12, the generalized GD w.r.t. f_amp(·) achieves both order-optimal sample and computational complexities. Notably, a very similar theory was obtained in [81] for a truncated version of the generalized GD (called Truncated Amplitude Flow therein), where the algorithm also employs the gradient update w.r.t. f_amp(·) but discards high-leverage data in a way similar to the truncated GD discussed in Section 6.2. However, in contrast to the intensity-based loss f (·) defined in (29), the truncation step is not crucial and can be safely removed when dealing with the amplitude-based f_amp(·). A main reason is that for any fixed x, f_amp(·) involves only the first and second moments of the (sub)-Gaussian random variables {a_i^T x}. As such, it exhibits much sharper measure concentration -- and hence much better controlled gradient components -- compared to the heavy-tailed f (·), which involves fourth moments of {a_i^T x}. This observation in turn highlights the importance of loss function design in nonconvex statistical estimation.
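The amplitude-flow update is easily sketched. The snippet below runs generalized GD on f_amp in the real-valued case, using sign(a_i^T x) as the (sub)gradient of |a_i^T x|; the sizes, hand-placed initialization, and step size are illustrative assumptions.

```python
import numpy as np

def amp_grad(x, A, sqrt_y):
    """Generalized gradient of f_amp(x) = (1/(2m)) sum_i (|a_i^T x| - sqrt(y_i))^2,
    taking sign(a_i^T x) as the (sub)gradient of |a_i^T x|."""
    ax = A @ x
    return A.T @ (ax - sqrt_y * np.sign(ax)) / len(sqrt_y)

rng = np.random.default_rng(6)
n, m = 10, 300
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
sqrt_y = np.abs(A @ x_star)                 # amplitude measurements

x = x_star + 0.1 * rng.standard_normal(n)   # local initialization (assumed)
for _ in range(300):
    x -= 0.5 * amp_grad(x, A, sqrt_y)

err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(err)
```

Note the constant step size: because the amplitude residual is linear rather than quadratic in a_i^T x, the effective curvature is roughly that of (1/m) A^T A, which is well conditioned without any truncation.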

Projected power method for constrained PCA
Many applications require solving a constrained quadratic maximization (or constrained PCA) problem: maximize_x x^* L x subject to x ∈ C, where C encodes the set of feasible points. This problem is nonconvex if either L is not negative semidefinite or C is a nonconvex set. To demonstrate the value of studying this problem, we introduce two important examples.
• Phase synchronization [86,87]. Suppose we wish to recover n unknown phases φ_1, · · · , φ_n ∈ [0, 2π) given their pairwise relative phases. Setting (x⋆)_i = e^{iφ_i}, this problem reduces to estimating x⋆ = [(x⋆)_i]_{1≤i≤n} from x⋆x⋆^*, a matrix that encodes all pairwise phase differences (x⋆)_i (x⋆)_j^* = e^{i(φ_i − φ_j)}. To account for the noisy nature of practical measurements, suppose that what we observe is L = x⋆x⋆^* + σW, where W is a Hermitian matrix whose entries {W_{i,j}}_{i≤j} are i.i.d. standard complex Gaussians. The quantity σ indicates the noise level, which determines the hardness of the problem. A natural way to attempt recovery is to solve the following problem: maximize_x x^* L x subject to |x_i| = 1, 1 ≤ i ≤ n.
• Joint alignment [10,88]. Imagine we want to estimate n discrete variables {(x⋆)_i}_{1≤i≤n}, where each variable can take m possible values, namely, (x⋆)_i ∈ {1, · · · , m}. Suppose that estimation needs to be performed based on pairwise difference samples y_{i,j} = (x⋆)_i − (x⋆)_j + z_{i,j} mod m, where the z_{i,j}'s are i.i.d. noise whose distributions dictate the recovery limits. To facilitate computation, one strategy is to lift each discrete variable (x⋆)_i into an m-dimensional vector x_i ∈ {e_1, · · · , e_m}. We then introduce a matrix L that properly encodes all log-likelihood information. After simple manipulation (see [10] for details), maximum likelihood estimation can be cast as a constrained quadratic maximization of the form (92).
More examples of constrained PCA include an alternative formulation of phase retrieval [89], sparse PCA [90], and multi-channel blind deconvolution with sparsity priors [91].
To solve (92), two algorithms naturally come to mind. The first is projected GD, which follows the natural projected update rule. Another possibility is the projected power method (PPM) [10,90], which drops the current iterate and performs the projection only over the gradient component. While the PPM is perhaps best motivated by its connection to the canonical eigenvector problem (which is often solved by the power method), we remark on its close resemblance to projected GD: in fact, for many constraint sets C (e.g. the ones arising in phase synchronization and joint alignment), (93) is equivalent to projected GD with the step size η_t → ∞.
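As a concrete illustration of the PPM, the sketch below runs it on a synthetic phase synchronization instance, where the projection onto C = {x : |x_i| = 1} is simply entrywise normalization. The noise level, warm start, and iteration count are illustrative assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 100, 0.5
x_star = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # unknown phases
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (W + W.conj().T) / 2                             # Hermitian noise matrix
L = np.outer(x_star, x_star.conj()) + sigma * W      # noisy pairwise phase matrix

x = x_star * np.exp(1j * 0.2 * rng.standard_normal(n))  # warm start assumed
for _ in range(50):
    z = L @ x
    x = z / np.abs(z)        # project each entry back onto the unit circle
# correlation with the truth, modulo the unavoidable global phase rotation
corr = np.abs(np.vdot(x, x_star)) / n
```

Each iteration is one matrix-vector product followed by an entrywise normalization, which is exactly the power method with the projection onto C interleaved.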
As it turns out, the PPM provably achieves near-optimal sample and computational complexities for the preceding two examples. Due to space limitations, the theory is described only in passing.
• Phase synchronization. With high probability, the PPM with proper initialization converges linearly to the global optimum, as long as the noise level satisfies σ ≲ √(n / log n). This is information-theoretically optimal up to some log factor [67,86].
• Joint alignment. With high probability, the PPM with proper initialization converges linearly to the ground truth, as long as a certain Kullback-Leibler divergence w.r.t. the noise distribution exceeds the information-theoretic threshold. See details in [10].

Gradient descent on manifolds
In many problems of interest, it is desirable to impose additional constraints on the object of interest, which leads to a constrained optimization problem over manifolds. In the context of low-rank matrix factorization, to eliminate the global scaling ambiguity, one might constrain the low-rank factors to live on a Grassmann manifold or a Riemannian quotient manifold [92,93]. To fix ideas, take matrix completion as an example. When factorizing M = L R^⊤, we might assume L ∈ G(n_1, r), where G(n_1, r) denotes the Grassmann manifold that parametrizes all r-dimensional linear subspaces of the n_1-dimensional space (more specifically, any point in G(n_1, r) is an equivalence class of n_1 × r orthonormal matrices; see [92] for details). In words, we search for an r-dimensional subspace L while ignoring the global rotation. It is also assumed that L^⊤ L = I_r to remove the global scaling ambiguity (otherwise (cL, c^{-1}R) is always equivalent to (L, R) for any c ≠ 0). One might then minimize the loss function defined over the Grassmann manifold, namely minimize_{L ∈ G(n_1, r)} F(L), where F(L) := min_{R ∈ R^{n_2×r}} f(L, R).
As it turns out, it is possible to apply GD to F(·) over the Grassmann manifold by moving along geodesics; here, a geodesic is the shortest path between two points on a manifold. See [92] for an excellent overview. In what follows, we provide a very brief exposition to highlight its difference from nominal gradient descent in Euclidean space.
We start by writing out the conventional Euclidean gradient of F(·) at the tth iterate L_t [94], in which the inner minimizer R_t is the corresponding least-squares solution. The gradient on the Grassmann manifold, denoted by ∇_G F(·), is then obtained by projecting the Euclidean gradient onto the tangent space, i.e. ∇_G F(L) = (I − LL^⊤)∇F(L). Let −∇_G F(L_t) = Ũ_t Σ̃_t Ṽ_t^⊤ be its compact SVD; the geodesic on the Grassmann manifold along the direction −∇_G F(L_t) then admits a closed form, and we update the iterates by moving along this geodesic with some properly chosen step size η_t. In the rank-1 case where r = 1, the update rule (97) simplifies to L_{t+1} = L_t cos(η_t σ) − (∇_G F(L_t)/σ) sin(η_t σ), with σ := ∥∇_G F(L_t)∥_2. As can be verified, L_{t+1} automatically stays on the unit sphere, obeying L_{t+1}^⊤ L_{t+1} = 1.
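The rank-1 geodesic update can be demonstrated on the unit sphere. The sketch below uses the Rayleigh-quotient loss F(L) = −L^⊤ M L as a convenient stand-in for the matrix-completion objective (the planted matrix M, step size, and iteration budget are assumptions for the demo); each step projects the Euclidean gradient onto the tangent space and then moves along the geodesic, so the iterate stays exactly on the sphere.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
u_true = rng.standard_normal(n); u_true /= np.linalg.norm(u_true)
W = rng.standard_normal((n, n)); W = (W + W.T) / np.sqrt(2 * n)
M = 5.0 * np.outer(u_true, u_true) + W       # planted leading direction

def grad_F(L):                               # Euclidean gradient of F(L) = -L^T M L
    return -2.0 * M @ L

L = rng.standard_normal(n); L /= np.linalg.norm(L)
eta = 0.02
for _ in range(1000):
    g = grad_F(L)
    g = g - (L @ g) * L                      # project onto the tangent space at L
    sigma = np.linalg.norm(g)
    if sigma < 1e-12:
        break
    # geodesic step on the unit sphere along the direction -g
    L = L * np.cos(eta * sigma) - (g / sigma) * np.sin(eta * sigma)

align = abs(L @ np.linalg.eigh(M)[1][:, -1])  # alignment with the top eigenvector
unit = abs(np.linalg.norm(L) - 1.0)           # the iterate never leaves the sphere
```

Unlike a Euclidean gradient step followed by renormalization, the geodesic update is an exact motion on the manifold, which is why no projection back onto the sphere is ever needed.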
One of the earliest provable nonconvex methods for matrix completion, the OptSpace algorithm by Keshavan et al. [16,62], performs gradient descent on the Grassmann manifold, tailored to a loss function over L ∈ G(n_1, r) and R ∈ G(n_2, r), with some additional regularization terms to promote incoherence (see Section 5.2.2). It is shown in [16,62] that GD on the Grassmann manifold converges to the truth with high probability if n_2 p ≳ µ²κ⁶r²n log n, provided that a proper initialization is given.

Stochastic gradient descent
Many problems involve an empirical loss function that is an average of sample losses. When the sample size m is large, it is computationally expensive to apply the gradient update rule, which goes through all data samples, in every iteration. Instead, one might apply stochastic gradient descent (SGD) [101-103], where each iteration uses only a single sample or a small subset of samples to form the search direction. Specifically, SGD follows an update rule in which Ω_t ⊆ {1, . . . , m} is a subset of cardinality k selected uniformly at random. Here, k ≥ 1 is known as the mini-batch size, which governs the trade-off between the computational cost per iteration and the convergence rate; a properly chosen mini-batch size optimizes the total computational cost under the practical constraints. See [84,104-107] for applications of SGD to phase retrieval (which has an interesting connection with the Kaczmarz method), and [108] for its application to matrix factorization.
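A minimal mini-batch SGD sketch follows, using a simple least-squares loss as a stand-in for the general empirical loss above; the sizes, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 10, 1000, 32                      # k is the mini-batch size
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = A @ x_star                              # consistent linear measurements

x = np.zeros(n)
eta = 0.05
for t in range(4000):
    batch = rng.choice(m, size=k, replace=False)   # Ω_t, drawn uniformly at random
    # gradient of the mini-batch loss (1/2k) * ||A_batch x - y_batch||^2
    grad = A[batch].T @ (A[batch] @ x - y[batch]) / k
    x -= eta * grad
err = np.linalg.norm(x - x_star)
```

Each iteration costs O(kn) rather than O(mn); enlarging k reduces the gradient variance per step at a proportionally higher per-iteration cost, which is exactly the trade-off discussed above.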

Beyond gradient descent
Gradient descent is certainly not the only method that can be employed to solve the problem (1). Indeed, many other algorithms have been proposed, which come with different levels of theoretical guarantees. Due to space limitations, this section reviews only two popular alternatives to the gradient methods discussed so far. For simplicity, we consider the following unconstrained problem (with a slight abuse of notation): minimize_{L ∈ R^{n_1×r}, R ∈ R^{n_2×r}} f(L, R).

Alternating minimization
To optimize the core problem (102), alternating minimization (AltMin) alternates between two subproblems, updating R_t and L_t sequentially for t = 1, 2, . . ., starting from an appropriate initialization L_0. For many problems discussed here, both (103a) and (103b) are convex and can be solved efficiently.

Matrix sensing
Consider the loss function (21). In each iteration, AltMin proceeds as follows [17], for t = 1, 2, . . .: each substep is a linear least-squares problem, which can often be solved efficiently via the conjugate gradient algorithm [109]. To illustrate why this is a promising scheme, we look at the following simple example.

Example 2.
Consider the case where A is the identity (i.e. A(M) = M). We claim that, given almost any initialization, AltMin converges to the truth after two updates. To see this, first note that the output of the first iteration can be written as R_1 = R⋆ L⋆^⊤ L_0 (L_0^⊤ L_0)^{-1}. As long as both L⋆^⊤ L_0 and L_0^⊤ L_0 are full-rank, the column space of R_1 matches perfectly with that of R⋆. Armed with this fact, the subsequent least-squares problem (i.e. the update for L_1) is exact, in the sense that L_1 R_1^⊤ recovers M exactly. With the above identity example in mind, we expect AltMin to converge fast if A is nearly isometric. Toward this end, one has the following theory.
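Example 2 is easy to verify numerically. The sketch below runs the two AltMin least-squares updates with A the identity and a random initialization L_0, and checks that the product L_1 R_1^⊤ already matches M (the sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, r = 8, 6, 2
L_star = rng.standard_normal((n1, r))
R_star = rng.standard_normal((n2, r))
M = L_star @ R_star.T                 # ground truth; A is the identity operator

L0 = rng.standard_normal((n1, r))     # (almost) arbitrary initialization
# first AltMin update: R1 = argmin_R ||L0 R^T - M||_F
R1 = (np.linalg.pinv(L0) @ M).T
# second AltMin update: L1 = argmin_L ||L R1^T - M||_F
L1 = M @ np.linalg.pinv(R1.T)
err = np.linalg.norm(L1 @ R1.T - M)
```

The first update already aligns the column space of R_1 with that of R_star, so the second least-squares step is exact, which matches the two-update claim above.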
In comparison to the performance of GD in Theorem 4, AltMin enjoys a better iteration complexity w.r.t. the condition number κ: it attains ε-accuracy within O(log(1/ε)) iterations, compared to O(κ log(1/ε)) iterations for GD. On the other hand, the requirement on the RIP constant depends quadratically on κ, leading to a sub-optimal sample complexity. To address this issue, Jain et al. [17] further developed a stage-wise AltMin algorithm, which only requires δ_{2r} = O(1/r²). Intuitively, if one singular value is much larger than the remaining ones, then one can treat M as a (noisy) rank-1 matrix and compute this rank-1 component via AltMin. Following this strategy, one successively applies AltMin to recover the dominant rank-1 component of the residual matrix, until the remaining part is well-conditioned. See [17] for details.

Phase retrieval
Consider the phase retrieval problem. It is helpful to view the amplitude measurements as bilinear measurements of the signs b⋆ = {b⋆_i ∈ {±1}}_{1≤i≤m} and the signal x⋆, namely, √y_i = b⋆_i · (a_i^⊤ x⋆). This leads to a simple yet useful alternative formulation of the amplitude loss minimization problem, where we abuse notation by writing the loss as f(b, x). Applying AltMin to the loss function f(b, x), we obtain the following update rule [110,111] for each t = 1, 2, . . .
where x_0 is an appropriate initial estimate, A† is the pseudo-inverse of A := [a_1, · · · , a_m]^⊤, and √y := [√y_i]_{1≤i≤m}. The step (106b) can again be solved efficiently using the conjugate gradient method [109]. This is exactly the Error Reduction (ER) algorithm proposed by Gerchberg and Saxton [43,112] in the 1970s. Given a reasonably good initialization, this algorithm converges linearly under the Gaussian design.
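The ER iteration (106) can be sketched compactly. The warm start and problem sizes below are illustrative assumptions, and the pseudo-inverse is precomputed so that each iteration reduces to two matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 20, 200
x_star = rng.standard_normal(n)
A = rng.standard_normal((m, n))
sqrt_y = np.abs(A @ x_star)                 # amplitude measurements

A_pinv = np.linalg.pinv(A)
x = x_star + 0.1 * rng.standard_normal(n)   # warm start assumed
for _ in range(100):
    b = np.sign(A @ x)                      # (106a): estimate the signs
    x = A_pinv @ (sqrt_y * b)               # (106b): least-squares update
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
```

Once the sign pattern b matches sign(A x⋆), the least-squares step returns x⋆ exactly, which is why the iteration converges so quickly from a good initialization.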
Remark 7. AltMin for phase retrieval was first analyzed by [110] but for a sample-splitting variant; that is, each iteration employs fresh samples, which facilitates analysis but is not the version used in practice. The theoretical guarantee for the original sample-reuse version was derived by [111].
In view of Theorem 17, alternating minimization, if carefully initialized, achieves optimal sample and computational complexities (up to logarithmic factors) all at once. This in turn explains its appealing performance in practice.
Matrix completion
For matrix completion, AltMin alternately minimizes the loss over the two low-rank factors on the observed entries, where P_Ω is defined in (30). Despite its popularity in practice [113], a clean analysis of the above update rule (108) is still missing to date. Several modifications have been proposed and analyzed in the literature, primarily to bypass mathematical difficulties:
• Sample splitting. Instead of reusing the same set of samples across all iterations, this approach draws a fresh set of samples at every iteration and performs AltMin on the new samples [17,114-117], where Ω_t denotes the sampling set used in the tth iteration, which is assumed to be statistically independent across iterations. It is proven in [17] that, under an appropriate initialization, the output satisfies ∥M − L_t R_t^⊤∥_F ≤ ε after t ≳ log(∥M∥_F/ε) iterations, provided that the sample size satisfies n_2 p ≳ µ⁴κ⁶ n r⁷ log n · log(r∥M∥_F/ε). The sample-splitting operation ensures statistical independence across iterations, which helps control the incoherence of the iterates. However, it necessarily couples the sample complexity with the target accuracy; for example, an infinite number of samples would be needed to achieve exact recovery.
• Regularization. Another strategy is to apply AltMin to the regularized loss function in (63) [19]. It is shown in [19] that AltMin without resampling converges to M, with the proviso that the sample size satisfies n_2 p ≳ µ²κ⁶ n r⁷ log n. Note that the subproblems (109a) and (109b) do not admit closed-form solutions; for properly chosen regularization functions, they can be solved using convex optimization algorithms.
It remains an open problem to establish theoretical guarantees for the original form of AltMin (108). Meanwhile, the existing sample complexity guarantees are quite sub-optimal in their dependency on r and κ, and should not be taken as indicative of the actual performance of AltMin.

Singular value projection
Another popular approach to solve (102) is singular value projection (SVP) [72,118,119]. In contrast to the algorithms discussed so far, SVP performs gradient descent in the full matrix space and then applies a partial singular value decomposition (SVD) to retain the low-rank structure. Specifically, it adopts the update rule Here, η t is the step size, and P r (Z) returns the best rank-r approximation of Z. Given that the iterates M t are always low-rank, one can store M t in a memory-efficient manner by storing its compact SVD.
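A minimal SVP sketch for matrix sensing with a Gaussian measurement operator follows; the operator, the sizes, and the step size η_t ≡ 1 (the value used in the theory below) are illustrative assumptions, and P_r is implemented via a truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(7)
n1, n2, r, m = 20, 20, 2, 1200
M_star = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
As = rng.standard_normal((m, n1, n2)) / np.sqrt(m)   # near-isometric Gaussian sensing
y = np.einsum('kij,ij->k', As, M_star)               # y_k = <A_k, M_star>

def P_r(Z):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

M = np.zeros((n1, n2))
for _ in range(100):
    resid = np.einsum('kij,ij->k', As, M) - y
    grad = np.einsum('k,kij->ij', resid, As)  # gradient of 0.5 * ||A(M) - y||^2
    M = P_r(M - 1.0 * grad)                   # full-space step, then rank-r projection
rel_err = np.linalg.norm(M - M_star) / np.linalg.norm(M_star)
```

Note that the gradient step lives in the full matrix space; it is the projection P_r that pulls the iterate back to the rank-r set, in contrast to the factored methods discussed earlier.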
The SVP algorithm is a popular approach for matrix sensing and matrix completion, where the partial SVD can be calculated using Krylov subspace methods (e.g. Lanczos algorithm) [109] or the randomized linear algebra algorithms [120]. The following theorem establishes performance guarantees for SVP.
Theorem 19 (SVP for matrix completion [72]). Consider the problem (32) and set M_0 = 0. Suppose that n²p ≥ c_0 κ⁶µ⁴r⁶n log n for some large constant c_0 > 0, and that the step size is set as η_t ≡ 1. Then with probability exceeding 1 − O(n^{-10}), the SVP iterates achieve ε-accuracy as long as t ≥ c_1 log(∥M∥/ε) for some constant c_1 > 0.
Theorem 18 and Theorem 19 indicate that SVP converges linearly as soon as the sample size is sufficiently large.

Further pointers to other algorithms
A few other nonconvex matrix factorization algorithms have been left out due to space constraints, including but not limited to normalized iterative hard thresholding (NIHT) [121], atomic decomposition for minimum rank approximation (ADMiRA) [122], approximate message passing [123-125], block coordinate gradient descent [19], coordinate descent [126], and conjugate gradient [127]. The readers are referred to these papers for detailed descriptions.

Initialization via spectral methods
The theoretical performance guarantees presented in the last three sections rely heavily on proper initialization. One popular scheme that often generates a reasonably good initial estimate is the spectral method. Informally, this strategy starts by arranging the data samples into a matrix of the form Y = Y⋆ + ∆, where Y⋆ represents a certain large-sample limit whose eigenspace / singular subspaces reveal the truth, and ∆ captures the fluctuation due to the finite-sample effect. One then attempts to estimate the truth by computing the eigenspace / singular subspace of Y, provided that the finite-sample fluctuation is well-controlled. This simple strategy has proven quite powerful and versatile in providing a "warm start" for many nonconvex matrix factorization algorithms.

Preliminaries: matrix perturbation theory
Understanding the performance of the spectral method requires some elementary toolkits regarding eigenspace / singular subspace perturbations, which we review in this subsection.
To begin with, let Y⋆ ∈ R^{n×n} be a symmetric matrix, whose eigenvalues are real-valued. In many cases, we only have access to a perturbed version Y of Y⋆ (cf. (111)), where the perturbation ∆ is a "small" symmetric matrix. How do the eigenvectors of Y deviate from those of Y⋆ as a result of such a perturbation?
As it turns out, the eigenspace of Y is a stable estimate of the eigenspace of Y⋆, with the proviso that the perturbation is sufficiently small in size. This was first established in the celebrated Davis-Kahan sin Θ theorem [128]. Specifically, partition the eigenvalues of Y⋆ into two groups, {λ_1(Y⋆), . . . , λ_r(Y⋆)} and {λ_{r+1}(Y⋆), . . . , λ_n(Y⋆)}, where 1 ≤ r < n. We assume that the eigen-gap between the two groups, λ_r(Y⋆) − λ_{r+1}(Y⋆), is strictly positive. For example, if Y⋆ ⪰ 0 and has rank r, then all the eigenvalues in the second group are identically zero.
Suppose we wish to estimate the eigenspace associated with the first group. Denote by U⋆ ∈ R^{n×r} (resp. U ∈ R^{n×r}) an orthonormal matrix whose columns are the first r eigenvectors of Y⋆ (resp. Y). In order to measure the distance between the two subspaces spanned by U⋆ and U, we introduce the following metric that accounts for global orthonormal transformations; this metric is closely related to the dist(·, ·) metric introduced in (48).
Theorem 20 (Davis-Kahan sin Θ Theorem [128]).
Remark 8. The bound presented in (114) is in fact a simplified, but slightly more user-friendly, version of the original Davis-Kahan inequality; a more general result holds whenever the relevant eigen-gap is strictly positive. These results are referred to as sin Θ theorems because the distance metric dist_p(U, U⋆) is identical to max_{1≤i≤r} |sin θ_i|, where {θ_i}_{1≤i≤r} are the so-called principal angles [129] between the two subspaces spanned by U and U⋆.
Furthermore, similar perturbation bounds can be obtained for asymmetric matrices. Suppose that Y, Y⋆, ∆ ∈ R^{n_1×n_2} in (111). Let U (resp. U⋆) and V (resp. V⋆) consist of the first r left and right singular vectors of Y (resp. Y⋆), respectively. Then we have the celebrated Wedin sin Θ theorem [130] concerning perturbed singular subspaces.
In addition, one might naturally wonder how the eigenvalues / singular values are affected by the perturbation. To this end, Weyl's inequality provides a simple answer: |λ_i(Y) − λ_i(Y⋆)| ≤ ∥∆∥ for all 1 ≤ i ≤ n (and similarly |σ_i(Y) − σ_i(Y⋆)| ≤ ∥∆∥ for singular values). In summary, both the eigenspace (resp. singular subspace) perturbation and the eigenvalue (resp. singular value) perturbation rely heavily on the spectral norm of the perturbation ∆.
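These perturbation bounds are easy to sanity-check numerically. The sketch below builds a rank-3 Y⋆ with eigen-gap 6, adds a unit-spectral-norm symmetric perturbation, and verifies Weyl's inequality together with a simplified Davis-Kahan bound of the form 2∥∆∥/(λ_r(Y⋆) − λ_{r+1}(Y⋆)); the constant-2 form is one common user-friendly variant, assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
n, r = 40, 3
U_star = np.linalg.qr(rng.standard_normal((n, r)))[0]
Y_star = U_star @ np.diag([10.0, 8.0, 6.0]) @ U_star.T   # rank-3, eigen-gap 6
Delta = rng.standard_normal((n, n)); Delta = (Delta + Delta.T) / 2
Delta /= np.linalg.norm(Delta, 2)                        # spectral norm 1
Y = Y_star + Delta

w_star = np.sort(np.linalg.eigvalsh(Y_star))[::-1]
w, V = np.linalg.eigh(Y)
order = np.argsort(w)[::-1]
w = w[order]; U = V[:, order[:r]]

weyl_gap = np.max(np.abs(w - w_star))                    # Weyl: <= ||Delta||
# sin-theta distance between the two r-dimensional eigenspaces
sin_theta = np.linalg.norm(U @ U.T - U_star @ U_star.T, 2)
dk_bound = 2 * np.linalg.norm(Delta, 2) / 6.0            # simplified Davis-Kahan
```

Both inequalities hold with room to spare here, reflecting that the subspace error scales with ∥∆∥ divided by the eigen-gap, while the eigenvalue error scales with ∥∆∥ alone.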

Spectral methods
With the matrix perturbation theory in place, we are positioned to present spectral methods for various low-rank matrix factorization problems. As we shall see, these methods are all variations on a common recipe.

Matrix sensing
We start with the prototypical problem of matrix sensing (19) as described in Section 4.1. Let us construct the surrogate matrix Y := A*(y) = A*(A(M)), where A is the linear operator defined in (23) and A* denotes its adjoint. The generic version of the spectral method then proceeds by computing (i) two matrices U ∈ R^{n_1×r} and V ∈ R^{n_2×r} whose columns consist of the top-r left and right singular vectors of Y, respectively, and (ii) a diagonal matrix Σ ∈ R^{r×r} that contains the corresponding top-r singular values. In the hope that U, V and Σ are reasonably reliable estimates of their ground-truth counterparts, we take L_0 = UΣ^{1/2} and R_0 = VΣ^{1/2} as estimates of the low-rank factors L and R in (22). If M ∈ R^{n×n} is known to be positive semidefinite, then we can instead let U ∈ R^{n×r} consist of the top-r leading eigenvectors, with Σ a diagonal matrix containing the top-r eigenvalues. Why would this be a good strategy? In view of Section 8.1, the three matrices U, V and Σ become reliable estimates as long as ∥Y − M∥ is well-controlled. A simple way to control ∥Y − M∥ arises when A satisfies the RIP in Definition 6.
Lemma 8. Suppose that M is a rank-r matrix, and assume that A satisfies 2r-RIP with RIP constant δ_{2r}; then ∥Y − M∥ admits the bound (120).
Proof of Lemma 8. Let x, y be two arbitrary vectors. The quadratic form x^⊤(Y − M)y can be bounded via Lemma 3. Invoking a variational characterization of the spectral norm ∥·∥ then yields (120).
In what follows, we first illustrate how to control the estimation error in the rank-1 case where M = λ⋆ u⋆ u⋆^⊤ ⪰ 0. In this case, the leading eigenvector u of Y obeys a perturbation bound in which (i) comes from Theorem 20, and (ii) holds as long as ∥Y − M∥ ≤ σ_1(M)/2 (which is guaranteed if δ_2 ≤ 1/2 according to Lemma 8). Similarly, we can invoke Weyl's inequality and Lemma 8 to control the gap between the leading eigenvalue λ of Y and λ⋆. Combining the preceding two bounds, we see that if u^⊤ u⋆ ≥ 0, then the difference between our estimate √λ u and the true low-rank factor √λ⋆ u⋆ is controlled, where the last inequality makes use of (121), (122), and (113). Moving beyond this simple rank-1 case, a more general (and often tighter) bound can be obtained via a refined argument from [23, Lemma 5.14]. We present the theory below; the proof can be found in Appendix D.
Theorem 22 (Spectral method for matrix sensing [23]). Fix ζ > 0. Suppose A satisfies 2r-RIP with RIP constant δ_{2r} < c_0√ζ/(√r κ) for some sufficiently small constant c_0 > 0. Then the spectral estimate (119) achieves the desired estimation accuracy.
Remark 9. It is worth pointing out that, in view of Fact 1, the vanilla spectral method needs O(nr²) samples to land in the local basin of attraction (in which linear convergence of GD is guaranteed according to Theorem 4).
As discussed in Section 5, the RIP does not hold for the sensing matrices used in many problems. Nevertheless, one may still be able to show that the leading singular subspace of the surrogate matrix Y contains useful information about the truth M . In the sequel, we will go over several examples to demonstrate this point.

Phase retrieval
Recall that the phase retrieval problem in Section 4.2 can be viewed as a matrix sensing problem, where we seek to recover a rank-1 matrix M = x⋆x⋆^⊤ with sensing matrices A_i = a_i a_i^⊤. To obtain an initial guess x_0 that is close to the truth x⋆, we follow the recipe described in (118) by estimating the leading eigenvector u and leading eigenvalue λ of the surrogate matrix Y = (1/m) Σ_{i=1}^m y_i a_i a_i^⊤, and forming the initial guess x_0 accordingly (see (124) and footnote 11). Unfortunately, the RIP does not hold for the sensing operator in phase retrieval, which precludes us from invoking Theorem 22. There is, however, a simple and intuitive explanation for why x_0 is a reasonably good estimate of x⋆. Under the Gaussian design, the surrogate matrix Y in (123) is the sample average of m i.i.d. rank-one random matrices {y_i a_i a_i^⊤}_{1≤i≤m}. When the number of samples m is large, this sample average should be "close" to its expectation E[Y] = 2x⋆x⋆^⊤ + ∥x⋆∥₂² I_n, whose best rank-1 approximation is precisely 3x⋆x⋆^⊤. Since Y approximates E[Y], we expect x_0 in (124) to carry useful information about x⋆. These intuitive arguments can be made precise: applying standard matrix concentration inequalities [54] to the surrogate matrix in (123) and invoking the Davis-Kahan sin Θ theorem, one arrives at the following result.
Theorem 23 (Spectral method for phase retrieval [18]). Consider phase retrieval in Section 4.2, where x⋆ ∈ R^n is any given vector. Fix any ζ > 0, and suppose m ≥ c_0 n log n for some sufficiently large constant c_0 > 0. Then the spectral estimate (124) obeys dist(x_0, x⋆) ≤ ζ∥x⋆∥₂ with probability at least 1 − O(n^{-2}).
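The spectral initialization for phase retrieval takes only a few lines. In the sketch below (illustrative sizes, ∥x⋆∥₂ = 1), the top eigenvector of the surrogate matrix is scaled by √(λ/3), reflecting the fact that the top eigenvalue of E[Y] is 3∥x⋆∥₂².

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 50, 2000
x_star = rng.standard_normal(n); x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2

Y = (A.T * y) @ A / m                 # surrogate (1/m) * sum_i y_i a_i a_i^T
w, V = np.linalg.eigh(Y)
u, lam = V[:, -1], w[-1]              # leading eigenpair of Y
x0 = np.sqrt(lam / 3.0) * u           # scale so that ||x0|| approximates ||x_star||
corr = abs(u @ x_star)                # alignment with the truth (up to sign)
scale = np.linalg.norm(x0)
```

With m/n large, the leading eigenvector is well aligned with ±x⋆ and the eigenvalue-based scaling recovers the signal norm to within a few percent.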

Quadratic sensing
An argument similar to phase retrieval applies to quadratic sensing in (29), recognizing that the expectation of the surrogate matrix Y in (123) now becomes E[Y] = 2X⋆X⋆^⊤ + ∥X⋆∥_F² I_n. The spectral method then proceeds by computing U (which consists of the top-r eigenvectors of Y) and a diagonal matrix Σ whose ith diagonal entry is (λ_i(Y) − σ)/2, where σ = (1/m) Σ_{i=1}^m y_i serves as an estimate of ∥X⋆∥_F². In words, the diagonal entries of Σ can be approximately viewed as the top-r eigenvalues of (1/2)(Y − ∥X⋆∥_F² I_n). The initial guess for estimating the low-rank factor X⋆ is then set as X_0 = UΣ^{1/2}. The theory is as follows.
(Footnote 11: In the sample-limited regime with m ≍ n, one should replace λ/3 in (124) by (1/m) Σ_{i=1}^m y_i; the latter provides a more accurate estimate. See the discussions in Section 8.3.1.)

Blind deconvolution
The blind deconvolution problem introduced in Section 4.4 has a similar mathematical structure to that of phase retrieval. Recall the sensing model in (35). Instead of reconstructing a symmetric rank-1 matrix, we now aim to recover an asymmetric rank-1 matrix h⋆x⋆^* from the associated sensing matrices. Let u, v, and σ denote the leading left singular vector, right singular vector, and singular value of the surrogate matrix, respectively. The initial guess is then formed as h_0 = √σ u and x_0 = √σ v. This estimate provably reveals sufficient information about the truth, provided that the sample size m is sufficiently large.

Matrix completion
We now turn to matrix completion as introduced in Section 4.3, which is another instance of matrix sensing with sensing matrices of the form A_{i,j} = (1/√p) e_i e_j^⊤, so that the measurements obey ⟨A_{i,j}, M⟩ = (1/√p)(M)_{i,j}. Following the aforementioned procedure, we can form the surrogate matrix Y = p^{-1} P_Ω(M); notably, the scaling factor in (129) is chosen to ensure that E[Y] = M. We then construct the initial guesses for the low-rank factors L_0 and R_0 in the same manner as (119), using Y in (130). Since {A_{i,j} 1_{(i,j)∈Ω}} is a collection of independent random matrices, the matrix Bernstein inequality [54] yields a high-probability upper bound on the deviation ∥Y − E[Y]∥, which in turn allows us to apply the matrix perturbation bounds to control the accuracy of the spectral method.
Theorem 26 (Spectral method for matrix completion [19,21,26]). Consider matrix completion in Section 4.3. Fix ζ > 0, and suppose the condition number κ of M is a fixed constant. There exists a constant c_0 > 0 such that if p ≥ c_0 µ²r² log n / n, then with probability at least 1 − O(n^{-10}), the spectral estimate (119) achieves the desired estimation accuracy.
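A minimal sketch of the spectral method for matrix completion: rescale the observed entries by 1/p so that the surrogate is unbiased, then take the best rank-r approximation. The sizes and sampling rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
n, r, p = 100, 2, 0.3
M_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < p           # Ω: each entry observed independently w.p. p
Y = np.where(mask, M_star, 0.0) / p     # inverse-propensity scaling: E[Y] = M_star
U, s, Vt = np.linalg.svd(Y)
M0 = (U[:, :r] * s[:r]) @ Vt[:r]        # best rank-r approximation of Y
rel_err = np.linalg.norm(M0 - M_star) / np.linalg.norm(M_star)
```

The output is only a coarse estimate, but it lands within the basin of attraction that the refinement schemes of the previous sections require.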

Variants of spectral methods
We now illustrate several modifications of the spectral method, which are often found necessary to further enhance sample efficiency, increase robustness to outliers, or incorporate signal priors.

Truncated spectral method for sample efficiency
The generic recipe for spectral methods described above works well when the number of samples is sufficiently large compared with the underlying signal dimension. It might not be effective, however, when the sample complexity is on the order of the information-theoretic limit.
In what follows, we use phase retrieval to demonstrate the underlying issues and how to address them. Recall that Theorem 23 requires a sample complexity m ≳ n log n, which is a logarithmic factor larger than the signal dimension n. What happens if we only have access to m ≍ n samples, the information-theoretic limit (order-wise) for phase retrieval? In this more challenging regime, it turns out that we must modify the standard recipe by applying appropriate preprocessing before forming the surrogate matrix Y in (118).
We start by explaining why the surrogate matrix (123) for phase retrieval must be suboptimal in terms of sample complexity. Since every summand in (123) is positive semidefinite, for all 1 ≤ j ≤ m one has ∥Y∥ ≥ (1/m) y_j ∥a_j∥₂². In particular, taking j = i⋆ := arg max_{1≤i≤m} y_i gives ∥Y∥ ≥ (1/m) y_{i⋆} ∥a_{i⋆}∥₂². Under the Gaussian design, {y_i/∥x⋆∥₂²} is a collection of i.i.d. χ² random variables with 1 degree of freedom, so well-known estimates from extreme value theory [132] give y_{i⋆} ≈ 2∥x⋆∥₂² log m. Meanwhile, ∥a_{i⋆}∥₂² ≈ n for n sufficiently large. It follows from (131) that ∥Y∥ ≳ (n log m / m)∥x⋆∥₂². Recalling that E[Y] = 2x⋆x⋆^⊤ + ∥x⋆∥₂² I_n has a bounded spectral norm, (132) implies that, to keep the deviation between Y and E[Y] well-controlled, we must at least have (n log m)/m ≲ 1.
This condition cannot be satisfied with linear sample complexity m ≍ n, which explains why Theorem 23 demands a sample complexity m ≳ n log n.
The above analysis also suggests an easy fix: since the main culprit is that max_i y_i is unbounded (as m → ∞), we can apply a preprocessing function T(·) to each y_i to keep this quantity bounded. This is exactly the key idea behind the truncated spectral method proposed by Chen and Candès [20], in which the surrogate matrix is modified by replacing each y_i with T(y_i), where T(·) zeroes out any sample exceeding some predetermined truncation threshold γ. The initial point x_0 is then formed by scaling the leading eigenvector of Y_T to have roughly the same norm as x⋆, which can be estimated via σ = (1/m) Σ_{i=1}^m y_i. This is essentially a trimming operation, removing any sample y_i that exerts too much influence on the leading eigenvector. The trimming step turns out to be very effective, allowing one to achieve an order-wise optimal sample complexity.
Theorem 27 (Truncated spectral method for phase retrieval [20]). Consider phase retrieval in Section 4.2. Fix any ζ > 0, and suppose m ≥ c_0 n for some sufficiently large constant c_0 > 0. Then the truncated spectral estimate achieves the desired accuracy. Subsequently, several different designs of the preprocessing function have been proposed in the literature. One example was given in [81], where γ is chosen as the (cm)-th largest value (e.g. c = 1/6) in {y_j}_{1≤j≤m}. In words, this method only employs the subset of design vectors that are best aligned with the truth x⋆. With a properly tuned parameter c, this truncation scheme performs competitively with the scheme in (134).

Truncated spectral method for removing sparse outliers
When the samples are susceptible to adversarial corruption, e.g. in the robust phase retrieval problem (85), the spectral method might fail even in the presence of a single outlier, since an outlier of arbitrarily large magnitude can perturb the leading eigenvector of Y arbitrarily. To mitigate this issue, a median-truncation scheme was proposed in [24,82]: for some predetermined constant γ > 0, the preprocessing function in (136) includes only those samples whose values are not excessively large compared with the sample median of the samples. This makes the spectral method more robust against sparse, large outliers.
Theorem 28 (Median-truncated spectral method for robust phase retrieval [24]). Consider the robust phase retrieval problem in (85), and fix any ζ > 0. There exist constants c_0, c_1 > 0 such that if m ≥ c_0 n and α ≤ c_1, then the median-truncated spectral estimate achieves the desired accuracy.
The idea of applying truncation when forming a spectral estimate is also useful for the robust PCA problem (see Section 4.5). Since the observations (Γ)_{i,j} = (M)_{i,j} + (S)_{i,j} are potentially corrupted by large but sparse outliers, it is useful to first clean up the observations before constructing the surrogate matrix as in (130). Indeed, this is the strategy proposed in [61]. One starts by forming an estimate of the sparse outliers via the hard-thresholding operation H_l(·) defined in (82), where c > 0 is some predetermined constant (e.g. c = 3) and P_Ω is defined in (30). Armed with this estimate, we form the surrogate matrix Y as in (138) from the outlier-adjusted observations, and then apply the spectral method to Y (similar to the matrix completion case). This approach enjoys the following performance guarantee.
Theorem 29 (Spectral method for robust PCA [61]). Suppose that the condition number κ of M = L R^⊤ is a fixed constant, and fix ζ > 0. If the sample size and the sparsity fraction satisfy n²p ≥ c_0 µr²n log n and α ≤ c_1/(µr^{3/2}) for some large constants c_0, c_1 > 0, then with probability at least 1 − O(n^{-1}), the spectral estimate achieves the desired accuracy.

Spectral method for sparse phase retrieval
Last but not least, we briefly discuss how the spectral method can be modified to incorporate structural priors. As before, we use the example of sparse phase retrieval to illustrate the strategy, where we assume x⋆ is k-sparse (see Section 6.1.2). A simple idea is to first identify the support of x⋆, and then estimate the nonzero values by applying the spectral method to the submatrix of Y restricted to the estimated support. To this end, recall that E[Y] = 2x⋆x⋆^⊤ + ∥x⋆∥₂² I_n, so the larger diagonal entries of Y are more likely to correspond to the support. In light of this, a simple thresholding strategy adopted in [74] is to compare each Y_{i,i} against some preset threshold γ, setting Ŝ := {1 ≤ i ≤ n : Y_{i,i} > γ}. The nonzero part of x⋆ is then found by applying the spectral method outlined in Section 8.2.2 to Y_Ŝ = (1/m) Σ_{i=1}^m y_i a_{i,Ŝ} a_{i,Ŝ}^⊤, where a_{i,Ŝ} is the subvector of a_i supported on Ŝ. This strategy provably yields a reasonably good initial estimate as long as m ≳ k² log n; the complete theory can be found in [74]. See also [76] for a more involved approach, which provides better empirical performance.
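The support-recovery step and the subsequent spectral step can be sketched as follows. The threshold 1.2 · mean(y) (sitting between E[Y_{i,i}] = ∥x⋆∥₂² off the support and ∥x⋆∥₂² + 2x_i² on it) and the equal-magnitude nonzeros are illustrative assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(12)
n, k, m = 200, 4, 5000
supp = np.sort(rng.choice(n, k, replace=False))
x_star = np.zeros(n); x_star[supp] = 1.0 / np.sqrt(k)   # k-sparse, ||x_star|| = 1
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2

# diagonal of the surrogate: E[Y_ii] = 2 x_i^2 + ||x||^2, larger on the support
diag_Y = ((A ** 2) * y[:, None]).mean(axis=0)
S_hat = np.flatnonzero(diag_Y > 1.2 * y.mean())         # thresholded support estimate

A_S = A[:, S_hat]                                       # restrict to the estimated support
Y_S = (A_S.T * y) @ A_S / m
u = np.linalg.eigh(Y_S)[1][:, -1]
x0 = np.zeros(n); x0[S_hat] = u
corr = abs(x0 @ x_star)
```

Restricting the eigendecomposition to the k estimated coordinates replaces an n-dimensional spectral problem with a k-dimensional one, which is what yields the m ≳ k² log n sample requirement instead of one scaling with n.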

Precise asymptotic characterization and phase transitions for phase retrieval
The Davis-Kahan and Wedin sin Θ Theorems are broadly applicable and convenient to use, but they usually fall short of providing the tightest estimates. For many problems, if one examines the underlying statistical models carefully, it is often possible to obtain much more precise performance guarantees for spectral methods.
In [133], Lu and Li provided an asymptotically exact characterization of spectral initialization in the context of generalized linear regression, which subsumes phase retrieval as a special case. One way to quantify the quality of the leading eigenvector x₀ of the surrogate matrix Y_T is via the squared cosine similarity ρ(x, x₀) := ⟨x, x₀⟩² / (‖x‖₂² ‖x₀‖₂²), which measures the (squared) correlation between the truth x and the initialization x₀. The result is this: Theorem 30 (Precise asymptotic characterization [133]). Consider phase retrieval in Section 4.2, and let m/n = α for some constant α > 0. Under mild technical conditions on the preprocessing function T, the leading eigenvector x₀ of Y_T obeys ρ(x, x₀) → ρ*(α) as n → ∞. Here, α_c > 0 is a fixed constant and ρ*(·) is a fixed function that is positive when α > α_c. Furthermore, λ₁(Y_T) − λ₂(Y_T) converges to a positive constant iff α > α_c.
Remark 10. The above characterization was first obtained in [133], under the assumption that T (·) is nonnegative. Later, this technical restriction was removed in [134]. Analytical formulas of α c and ρ * (α) are available for any given T (·), which can be found in [133,134].
The asymptotic prediction given in Theorem 30 reveals a phase transition phenomenon: there is a critical sampling ratio α c that marks the transition between two very contrasting regimes.
• An uncorrelated phase takes place when the sampling ratio α < α_c. Within this phase, ρ(x, x₀) → 0, meaning that the spectral estimate is asymptotically uncorrelated with the target.

• A correlated phase takes place when α > α_c. Within this phase, the spectral estimate is strictly better than a random guess. Moreover, there is a nonzero gap between the first and second largest eigenvalues of Y_T, which in turn implies that x₀ can be efficiently computed by the power method.
The phase transition boundary α c is determined by the preprocessing function T (·). A natural question arises as to which preprocessing function optimizes the phase transition point. At first glance, this seems to be a challenging task, as it is an infinite-dimensional functional optimization problem. Encouragingly, this can be analytically determined using the asymptotic characterizations stated in Theorem 30.
The value α * is called the weak recovery threshold. When α < α * , no algorithm can generate an estimate that is asymptotically positively correlated with x . The function T * α (·) is optimal in the sense that it approaches this weak recovery threshold.
Another way to formulate optimality is via the squared cosine similarity in (139). For any fixed sampling ratio α > 0, we seek a preprocessing function that maximizes the squared cosine similarity. The following theorem, derived by Lu et al., shows that a fixed function T*(·) is in fact uniformly optimal for all sampling ratios α. Therefore, instead of using (141), which takes different forms depending on α, one should directly use T*(·).
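The phase-transition phenomenon is easy to reproduce numerically. The sketch below, with arbitrary dimensions and the α-independent design T(y) = 1 − 1/y (clipped from below for numerical stability; the clipping level is our own implementation choice), estimates the squared cosine similarity at a few sampling ratios:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
x = rng.standard_normal(n)
x /= np.linalg.norm(x)          # unit-norm truth (illustrative assumption)

def T(y):
    # alpha-independent preprocessing 1 - 1/y, clipped below at -5 so that
    # near-zero measurements do not dominate the surrogate matrix
    return np.maximum(1.0 - 1.0 / np.maximum(y, 1e-12), -5.0)

def cosine_sq(alpha):
    m = int(alpha * n)
    A = rng.standard_normal((m, n))
    y = (A @ x) ** 2
    Y_T = (A * T(y)[:, None]).T @ A / m   # Y_T = (1/m) sum_i T(y_i) a_i a_i^T
    _, V = np.linalg.eigh(Y_T)
    return (V[:, -1] @ x) ** 2            # leading eigenvector vs. truth

rho = {alpha: cosine_sq(alpha) for alpha in (0.5, 2.0, 10.0)}
print(rho)
```

Below the critical ratio the squared similarity hovers near 1/n; well above it, the leading eigenvector of Y_T correlates strongly with the truth.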
Finally, to demonstrate the improvements brought by the optimal preprocessing function, we show in Fig. 4 the results of applying the spectral methods to estimate a 64 × 64 cameraman image from phaseless measurements under Poisson noise. It is evident that the optimal design significantly improves the performance of the method.

Notes
The idea of spectral methods can be traced back to the early work of Li [135], under the name of Principal Hessian Directions for general multi-index models. Similar spectral techniques were also proposed in [17,131] for initializing matrix completion algorithms. For phase retrieval, Netrapalli et al. [110] used the spectral method for initialization, and its theoretical guarantee was subsequently tightened in [18]; similar guarantees were also provided for the randomly coded diffraction pattern model in [18]. The first order-wise optimal spectral method was proposed by Chen and Candès [20], based on the truncation idea. This method has multiple variants [81,82], and has been shown to be robust against noise. The precise asymptotic characterization of the spectral method was first obtained in [133]. Based on this characterization, [134] determined the optimal weak recovery threshold for spectral methods.
Finally, the spectral method has been applied to many other problems beyond the ones discussed here, including but not limited to community detection [136,137], phase synchronization [9], joint alignment [10], ranking from pairwise comparisons [138,139], tensor estimation [140][141][142]. We have to omit these due to the space limit.

Global landscape and initialization-free algorithms
A separate line of work aims to study the global geometry of a loss function f(·) over the entire parameter space, often under appropriate statistical models of the data. As alluded to by the warm-up example in Section 3.2, such studies characterize the critical points and geometric curvatures of the loss surface, and highlight the (non-)existence of spurious local minima. The results of such geometric landscape analyses can then be used to understand the effectiveness of a particular optimization algorithm of choice.

Global landscape analysis
In general, global minimization requires one to avoid two types of undesired critical points: (1) local minima that are not global minima; (2) saddle points. If all critical points of a function f (·) are either global minima or strict saddle points, we say that f (·) has benign landscape. Here, we single out strict saddles from all possible saddle points, since they are easier to escape due to the existence of descent directions. 13 Loosely speaking, nonconvexity arises in these problems partly due to "symmetry", where the global solutions are identifiable only up to certain global transformations. This necessarily leads to multiple indistinguishable local minima that are globally optimal. Further, saddle points arise naturally when interpolating the loss surface between two separated local minima. Nonetheless, in spite of nonconvexity, a large family of problems exhibit benign landscape. This subsection gives a few such examples.

Two-layer linear neural network
A straightforward instance that has already been discussed is the warm-up example in Section 3. It can be slightly generalized as follows.
Example 3 (Two-layer linear neural network [35]). Given arbitrary data {x_i, y_i}_{i=1}^m with x_i, y_i ∈ Rⁿ, we wish to fit a two-layer linear network (see Fig. 5) using the quadratic loss f(A, B) := ‖Y − AB^T X‖²_F, where A, B ∈ R^{n×r} with r ≤ n, X := [x₁, ···, x_m] and Y := [y₁, ···, y_m].
In this setup, [35] showed that all local minima of this loss are global minima, while all other critical points are saddle points. In particular, when X = Y, Example 3 reduces to rank-r matrix factorization [or principal component analysis (PCA)], an immediate extension of the rank-1 warm-up example. When X ≠ Y, Example 3 is precisely the canonical correlation analysis (CCA) problem. This explains why both PCA and CCA, though highly nonconvex, admit efficient solutions.
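A quick numerical check of this claim for rank-1 PCA: gradient descent on the factorized quadratic loss, started randomly, reaches the Eckart-Young optimum (all constants below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10

# ground-truth matrix with a dominant rank-1 component plus a small remainder
u, v = rng.standard_normal(n), rng.standard_normal(n)
M = 3.0 * np.outer(u / np.linalg.norm(u), v / np.linalg.norm(v)) \
    + 0.1 * rng.standard_normal((n, n))

# gradient descent on f(a, b) = 0.5 * ||M - a b^T||_F^2 from a random start
a, b = 0.5 * rng.standard_normal(n), 0.5 * rng.standard_normal(n)
eta = 0.02
for _ in range(5000):
    R = np.outer(a, b) - M                      # residual
    a, b = a - eta * R @ b, b - eta * R.T @ a   # simultaneous updates

# Eckart-Young: the best rank-1 approximation error is sum of sigma_i^2, i >= 2
sigma = np.linalg.svd(M, compute_uv=False)
loss = 0.5 * np.linalg.norm(M - np.outer(a, b)) ** 2
optimal = 0.5 * (sigma[1:] ** 2).sum()
print(round(loss / optimal, 3))   # close to 1: GD found a global minimum
```

Despite the nonconvexity (any global rescaling (ca, b/c) is equally optimal), every run from a generic random start ends at the globally optimal loss value, consistent with the benign landscape.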

Matrix sensing and rank-constrained optimization
Moving beyond PCA and CCA, a more nontrivial problem is matrix sensing under the RIP. As shown in [30,31], when the RIP constant is sufficiently small, the loss function has no spurious local minima and all saddle points are strict saddles (Theorem 33). The analysis for this problem, while much simpler than that of other problems like phase retrieval and matrix completion, is representative of a typical strategy for analyzing problems of this kind.
Proof of Theorem 33. For conciseness, we focus on the rank-1 case with M♮ = x♮x♮^T and show that all local minima are global. The complete proof can be found in [30,31]. Consider any local minimum x of f(·), which is characterized by the first-order and second-order optimality conditions (143a)-(143b); without loss of generality, assume ‖x‖₂ ≤ ‖x♮‖₂. A typical proof idea is to demonstrate that if x is not globally optimal, then one can identify a descent direction, thus contradicting the local optimality of x. A natural guess for such a descent direction is the direction towards the truth, i.e. x − x♮. As a result, the proof consists in showing that, when the RIP constant is sufficiently small, the quadratic form (144), namely (x − x♮)^T ∇²f(x)(x − x♮), is negative unless x = x♮, which cannot happen at a local minimum. Additionally, the value of (144) is helpful in upper bounding λ_min(∇²f(x)) if x is a saddle point. See Appendix E for details.
We take a moment to expand on this result. Recall that we have introduced a version of strong convexity and smoothness for f(·) when accounting for global orthogonal transformation (Section 5.1.2). Another way to express this is through a different parameterization g(M) with M = XX^T, which clearly satisfies g(XX^T) = f(X). It is not hard to show that, in the presence of the RIP, the Hessian ∇²g(·) is well-conditioned when restricted to low-rank decision variables and directions. This motivates the following more general result, stated in terms of certain restricted well-conditionedness of g(·). One of its advantages is the applicability to more general loss functions beyond the squared loss.
Theorem 34 (Global landscape for rank-constrained problems [31]). Let g(·) be a convex function. Suppose that the problem of minimizing g(M) over positive semidefinite matrices M admits a solution M♮ with rank(M♮) = r < n. Assume that the restricted well-conditionedness condition α‖D‖²_F ≤ [∇²g(M)](D, D) ≤ β‖D‖²_F holds for all M and D with rank(M) ≤ 2r and rank(D) ≤ 4r, where β/α ≤ 3/2. Then the function f(X) = g(XX^T), where X ∈ R^{n×r}, has no spurious local minima, and all saddle points of f(·) are strict saddles.

Phase retrieval and matrix completion
Next, we move on to problems that fall short of restricted well-conditionedness. As it turns out, it is still possible to have benign landscape, although the Lipschitz constants w.r.t. both gradients and Hessians might be much larger. A typical example is phase retrieval, for which the smoothness condition is not well-controlled (as discussed in Lemma 5).
Theorem 35 (Global landscape for phase retrieval [27]). Consider the phase retrieval problem (27). If the sample size m ≳ n log³ n, then with high probability, there is no spurious local minimum, and all saddle points of f(·) are strict saddles.
Further, we turn to the kind of loss functions that only satisfy highly restricted strong convexity and smoothness. In some cases, one might be able to properly regularize the loss function to enable benign landscape. Here, regularization can be enforced in a way similar to regularized gradient descent as discussed in Section 5.2.2. In the following, we use matrix completion as a representative example.
Theorem 36 (Global landscape for matrix completion [29,145,146]). Consider the problem (32), but replace f(·) with a regularized loss f_reg(·), where λ > 0 is a regularization parameter and G₀(z) = max{z − α, 0}⁴ for some α > 0. For properly selected α and λ, if the sample size satisfies n²p ≳ n max{µκr log n, µ²κ²r²}, then with high probability, all local minima X of f_reg(·) satisfy XX^T = X♮X♮^T, and all saddle points of f_reg(·) are strict saddles.
Remark 11. The study of global landscape in matrix completion was initiated in [29]. The current result in Theorem 36 is taken from [146].
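To give a feel for how such regularizers look in code, the sketch below implements one common choice, penalizing rows of X whose norm exceeds α via G₀ (the exact regularizer behind Theorem 36 involves problem-specific scalings; this is a simplified stand-in), and validates its gradient by finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, alpha, lam = 8, 2, 0.6, 0.1   # toy sizes and parameters (arbitrary)

def G0(z):
    return np.maximum(z - alpha, 0.0) ** 4

def reg(X):
    # penalize rows of X whose l2 norm exceeds alpha (discourages "spiky" rows)
    return lam * G0(np.linalg.norm(X, axis=1)).sum()

def reg_grad(X):
    norms = np.linalg.norm(X, axis=1)
    coef = 4.0 * lam * np.maximum(norms - alpha, 0.0) ** 3
    safe = np.where(norms > 0, norms, 1.0)    # avoid division by zero
    return (coef / safe)[:, None] * X

X = rng.standard_normal((n, r))
# central finite differences to validate reg_grad
eps = 1e-6
num = np.zeros_like(X)
for i in range(n):
    for j in range(r):
        E = np.zeros_like(X)
        E[i, j] = eps
        num[i, j] = (reg(X + E) - reg(X - E)) / (2 * eps)
err = np.abs(num - reg_grad(X)).max()
print(err)   # tiny: analytic and numerical gradients agree
```

The fourth power in G₀ makes the penalty three times continuously differentiable, so it neither disturbs the smoothness of the landscape nor activates inside the incoherence region ‖row‖ ≤ α.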

Over-parameterization
Another scenario that merits special attention is over-parametrization [147,148]. Take matrix sensing and phase retrieval for instance: if we lift the decision variable to the full matrix space (as opposed to using low-rank factors), the resulting optimization problem becomes minimizing f_op(X) := (1/(4m)) Σ_{i=1}^m (y_i − ⟨A_i, XX^T⟩)² over X ∈ R^{n×n}, where y_i = ⟨A_i, M♮⟩ and M♮ is the true low-rank matrix. Here, A_i = a_i a_i^T for phase retrieval. As it turns out, under minimal sample complexity, f_op(·) does not have bad local minima even though we over-parametrize the model significantly.
Theorem 37 (Over-parametrized matrix sensing and phase retrieval). All local minima of the above function f_op(·) are globally optimal in the following two settings: • matrix sensing (the positive semidefinite case where M♮ ⪰ 0), as long as A satisfies the 4r-RIP with RIP constant δ_{4r} ≤ 1/5; • phase retrieval, as long as m ≥ c₀n for some sufficiently large constant c₀ > 0.
As an important implication of Theorem 37, even the over-parametrized optimization problem is amenable to computation. This was made precise by [148], which shows that running GD w.r.t. the over-parameterized loss f_op(·) also (approximately) recovers the truth under roughly the same sample complexity, provided that a near-zero initialization is adopted.
Proof of Theorem 37. Define the function g(M) := (1/(4m)) Σ_{i=1}^m (y_i − ⟨A_i, M⟩)² over the set of positive semidefinite matrices, so that f_op(X) = g(XX^T). The key is to recognize that, for the two cases considered in this theorem, the set of minimizers of g(·) over this set is the singleton {M♮}; see [31,149,150].
To see why this result is helpful, consider any local minimum X of f_op(·). By definition, there is an ε-ball around X within which X attains the smallest objective value; call this property (148). Now suppose that X (resp. XX^T) is not a global solution of f_op(·) (resp. g(·)). Then there exists an X̃X̃^T (resp. X̃) arbitrarily close to XX^T (resp. X) such that f_op(X̃) = g(X̃X̃^T) < g(XX^T) = f_op(X). For instance, one can take X̃X̃^T = (1 − ζ)XX^T + ζM♮ with ζ ↓ 0, which strictly decreases the objective by the convexity of g(·). This implies the existence of an X̃ that contradicts (148), thus establishing the claim.
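The implication above, that GD on the over-parameterized loss with a near-zero initialization still recovers a low-rank truth, is easy to observe in a toy simulation (dimensions, step size, and the choice m > n(n+1)/2, which makes the consistent PSD matrix unique, are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 500        # m > n(n+1)/2, so the PSD matrix fitting y is unique

u = rng.standard_normal(n)
u /= np.linalg.norm(u)
M_true = np.outer(u, u)                     # rank-1 PSD ground truth

# symmetric Gaussian measurements A_i and observations y_i = <A_i, M_true>
G = rng.standard_normal((m, n, n))
A_flat = ((G + G.transpose(0, 2, 1)) / 2).reshape(m, -1)
y = A_flat @ M_true.ravel()

# over-parameterized square factor X in R^{n x n}, started near zero
X = 1e-4 * rng.standard_normal((n, n))
eta = 0.02
for _ in range(6000):
    r = A_flat @ (X @ X.T).ravel() - y      # residuals <A_i, XX^T> - y_i
    S = (r @ A_flat).reshape(n, n)          # sum_i r_i A_i (symmetric)
    X -= eta * (2.0 / m) * S @ X            # gradient step on (1/2m) sum r_i^2

err = np.linalg.norm(X @ X.T - M_true) / np.linalg.norm(M_true)
print(round(err, 4))   # small: near-zero initialization + GD finds the truth
```

The small initialization matters: the signal component grows exponentially from near zero while the remaining directions stay tiny, so the iterates remain effectively low-rank throughout.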

Beyond low-rank matrix factorization
Finally, we note that benign landscape has been observed in numerous other contexts beyond matrix factorization. While they are beyond the scope of this paper, we briefly mention two important examples in chronological order: • Dictionary learning [151,152]. Observe a data matrix Y that can be factorized as Y = AX, where A is a square invertible dictionary matrix and X encodes the sparse coefficients. The goal is to learn A. It has been shown that a certain smooth approximation of the ℓ₁ loss exhibits benign nonconvex geometry over a sphere.
• M-estimator in statistical estimation [153]. Given a set of independent data points {x 1 , · · · , x n }, this work studied when the empirical loss function inherits the benign landscape of the population loss function. The results provide a general framework for establishing uniform convergence of the gradient and the Hessian of the empirical risk, and cover several examples such as binary classification, robust regression, and Gaussian mixture models.

Notes
The study of benign global landscapes dates back to the works on shallow neural networks in the 1980s [35] with deterministic data, if not earlier. Complete dictionary learning, analyzed by Sun et al. [151,152], was perhaps the first "modern" example where benign landscape is established by exploiting measure concentration of random data. This work inspired the line of global geometric analysis for many aforementioned problems, including phase retrieval [27,85], matrix sensing [30,31,145], matrix completion [29,31,145,146], and robust PCA [145]. For phase retrieval, [27] focused on the smooth squared loss in (27). The landscape of a more "robust" nonsmooth formulation f(x) = (1/m) Σ_{i=1}^m |(a_i^T x)² − (a_i^T x♮)²| has been studied by [85]. The optimization landscape for matrix completion was pioneered by [29], and later improved by [145,146]. In particular, [146] derived a model-free theory where no assumptions are imposed on M♮, which accommodates, for example, the noisy case and the case where the truth is only approximately low-rank. The global landscape of asymmetric matrix sensing / completion holds similarly by considering a loss function regularized by the term g(L, R) := ‖L^T L − R^T R‖²_F. Last but not least, we caution that many more nonconvex problems are not benign and indeed have bad local minima; for example, spurious local minima are common even in simple neural networks with nonlinear activations [154,155].

Gradient descent with random initialization
For many problems described above with benign landscape, there are no spurious local minima, and the only remaining task is to escape strict saddle points and find second-order critical points, which are then guaranteed to be global optima. In particular, our main algorithmic goal is to find a second-order critical point of a function exhibiting benign geometry. To make the discussion precise, we define the class of functions of interest as follows.

Definition 9 (strict saddle property [156,157]). A function f(·) is said to satisfy the (ε, γ, ζ)-strict saddle property for some ε, γ, ζ > 0 if, for each x, at least one of the following holds:
• (strong gradient) ‖∇f(x)‖₂ ≥ ε;
• (negative curvature) λ_min(∇²f(x)) ≤ −γ;
• (local minimum) there exists a local minimum x̄ such that ‖x − x̄‖₂ ≤ ζ.
In words, this property says that: every point either has a large gradient, or has a negative directional curvature, or lies sufficiently close to a local minimum. In addition, while we have not discussed the strong gradient condition in the preceding subsection, it arises for most of the aforementioned problems when x is not close to the global minimum.
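The trichotomy in Definition 9 can be made concrete with a few lines of code. Below we classify points of the toy function f(x) = 0.25(‖x‖₂² − 1)², whose minimizers form the unit sphere and whose only other critical point, the origin, has strictly negative curvature; the thresholds ε = γ = 0.1 are arbitrary choices:

```python
import numpy as np

# toy function f(x) = 0.25 * (||x||_2^2 - 1)^2: minimizers form the unit
# sphere; the origin is the only other critical point (curvature -1 there)
def grad(x):
    return (x @ x - 1.0) * x

def hess(x):
    return (x @ x - 1.0) * np.eye(x.size) + 2.0 * np.outer(x, x)

def classify(x, eps=0.1, gamma=0.1):
    # the three (non-exclusive) cases of the strict saddle property
    if np.linalg.norm(grad(x)) >= eps:
        return "strong gradient"
    if np.linalg.eigvalsh(hess(x))[0] <= -gamma:
        return "negative curvature"
    return "near a local minimum"

print(classify(np.zeros(3)))                 # the strict saddle at the origin
print(classify(np.array([1.0, 0.0, 0.0])))   # a global minimizer
print(classify(np.array([2.0, 0.0, 0.0])))   # far from all critical points
```

Every point of this function falls into one of the three regimes, which is exactly what the (ε, γ, ζ)-strict saddle property demands.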
A natural question arises as to whether an algorithm as simple as gradient descent can converge to a second-order critical point of a function satisfying this property. Clearly, GD cannot be initialized arbitrarily: if it starts from an undesired critical point (which obeys ∇f(x) = 0), it remains trapped there. But what happens if we initialize GD randomly?
A recent work [158] provided the first answer to this question. Borrowing tools from dynamical systems theory, it proves the following. Theorem 38 (Convergence of GD with random initialization [159]). Consider any twice continuously differentiable function f that satisfies the strict saddle property (Definition 9). If η_t < 1/β, with β the smoothness parameter, then GD with a random initialization converges to a local minimizer or to −∞ almost surely.
This theorem says that for a broad class of benign functions of interest, GD, when randomly initialized, never gets stuck at saddle points. The following example helps develop a better understanding of this theorem. Consider minimizing the quadratic f(x) = 0.5 x^T A x for a symmetric matrix A. The GD rule is x_{t+1} = x_t − η_t A x_t. If η_t ≡ η < 1/‖A‖, then x_t = (I − ηA)^t x₀. Now suppose that λ₁(A) ≥ ··· ≥ λ_{n−1}(A) > 0 > λ_n(A), and let E₊ be the subspace spanned by the first n − 1 eigenvectors. It is easy to see that 0 is a saddle point, and that x_t → 0 only if x₀ ∈ E₊. In fact, as long as x₀ contains a component outside E₊, this component keeps blowing up at a rate 1 + η|λ_n(A)|. Given that E₊ is (n − 1)-dimensional, we have P(x₀ ∈ E₊) = 0, and as a result P(x_t → 0) = 0.
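This example is easy to simulate. The snippet below builds such an A (the spectrum is an arbitrary choice), runs GD from a random start, and tracks the component along the negative eigendirection:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal eigenbasis
evals = np.array([1.0, 0.8, 0.6, 0.4, -0.5])       # one negative eigenvalue
A = Q @ np.diag(evals) @ Q.T                        # symmetric, ||A|| = 1

eta = 0.5                        # eta < 1/||A|| = 1
x = rng.standard_normal(n)
coef0 = abs(Q[:, -1] @ x)        # component along the negative eigendirection
for _ in range(50):
    x = x - eta * A @ x          # GD on f(x) = 0.5 * x^T A x
ratio = abs(Q[:, -1] @ x) / coef0

# components inside E+ shrink every step, while the one outside E+ is
# multiplied by exactly 1 + eta * |lambda_n| = 1.25 per iteration
print(ratio)   # 1.25 ** 50, about 7e4
```

Since a random x₀ has a nonzero component outside E₊ almost surely, the iterates are repelled from the saddle at 0 no matter how small that component initially is.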
The above theory has been extended to accommodate other optimization methods like coordinate descent, mirror descent, and the gradient primal-dual algorithm [159,160]. In addition, the above theory is generic and accommodates all benign functions satisfying the strict saddle property.
We caution, however, that almost-sure convergence does not imply fast convergence. In fact, there exist non-pathological functions such that randomly initialized GD takes time exponential in the ambient dimension to escape saddle points [161]. That being said, it is possible to develop problem-specific theory that reveals much better convergence guarantees. Once again, we take phase retrieval as an example.
Theorem 39 (Randomly initialized GD for phase retrieval [71]). Consider the problem (27), and suppose that m ≳ n poly log(m). Then with probability 1 − O(n^{−10}), the GD iterates with random initialization x₀ ∼ N(0, n^{−1}I_n) converge at the rates stated in [71], where c₃, c₄ > 0 are some constants and T₀ ≲ log n; here we assume, without loss of generality, that ‖x♮‖₂ = 1. To be more precise, the algorithm consists of two stages: • When 0 ≤ t ≤ T₀ ≲ log n: this stage allows GD to find and enter the local basin surrounding the truth, taking no more than O(log n) steps. To explain why this is fast, we remark that the signal strength |⟨x_t, x♮⟩| grows exponentially fast in this stage, while the residual strength ‖x_t − ⟨x_t, x♮⟩x♮‖₂ does not increase by much.
• When t > T₀: once the iterates enter the local basin, the ℓ₂ estimation error decays exponentially fast, similar to Theorem 6. This stage takes about O(log(1/ε)) iterations to reach ε-accuracy. Taken collectively, GD with random initialization achieves ε-accuracy in O(log n + log(1/ε)) iterations, making it appealing for solving large-scale problems. It is worth noting that the GD iterates never approach or hit the saddle points; in fact, there is often a positive force dragging the GD iterates away from them.
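The two-stage behavior is simple to observe empirically. The following sketch runs randomly initialized GD on the phase retrieval loss (dimensions and step size are arbitrary; this is an illustration rather than the exact setting of Theorem 39):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 50, 800
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)           # normalize so that ||x||_2 = 1

A = rng.standard_normal((m, n))            # Gaussian sensing vectors (rows)
y = (A @ x_true) ** 2

def grad(x):
    Ax = A @ x
    # gradient of f(x) = (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2
    return ((Ax ** 2 - y) * Ax) @ A / m

x = rng.standard_normal(n) / np.sqrt(n)    # random init ~ N(0, n^{-1} I_n)
eta = 0.05
dists = []
for t in range(2000):
    x -= eta * grad(x)
    # distance up to the unrecoverable global sign
    dists.append(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))

print(round(dists[0], 2), round(dists[-1], 6))
```

Plotting `dists` against t typically shows a short plateau-then-drop (stage 1, where the hidden signal correlation builds up) followed by a straight line on a log scale (stage 2, linear convergence).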
Furthermore, there are other examples for which randomly initialized GD converges fast; see [162,163] for further examples. We note, however, that the theoretical support for random initialization is currently lacking for many important problems (including matrix completion and blind deconvolution).

Generic saddle-escaping algorithms
Given that gradient descent with random initialization has only been shown to be efficient for specific examples, it is natural to ask how to design generic optimization algorithms that efficiently escape saddle points for all functions with benign geometry (i.e. those satisfying the strict saddle property in Definition 9). To see why this is hopeful, consider any strict saddle point x (which obeys ∇f(x) = 0). The Taylor expansion f(x + ∆) ≈ f(x) + 0.5 ∆^T ∇²f(x)∆ holds for any sufficiently small ∆. Since x is a strict saddle, one can identify a direction ∆ of negative curvature (i.e. ∆^T ∇²f(x)∆ < 0) and hence further decrease the objective value (i.e. f(x + ∆) < f(x)). In other words, the existence of negative curvature enables efficient escape from undesired saddle points. Many algorithms have been proposed towards this goal. Roughly speaking, they can be categorized into three classes, depending on which basic operations are needed: (1) Hessian-based algorithms; (2) Hessian-vector-product-based algorithms; (3) gradient-based algorithms.
Remark 12. Caution needs to be exercised as this categorization is very rough at best. One can certainly argue that many Hessian-based operations can be carried out via Hessian-vector products, and that Hessian-vector products can be computed using only gradient information.
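The negative-curvature escape argument above can be checked numerically on a toy function with a strict saddle at the origin (the function and step size are arbitrary choices):

```python
import numpy as np

# toy function with a strict saddle at the origin:
# f(x1, x2) = x1^2 - x2^2 + 0.5 * x2^4, Hessian at 0 = diag(2, -2)
def f(x):
    return x[0] ** 2 - x[1] ** 2 + 0.5 * x[1] ** 4

def hessian(x):
    return np.array([[2.0, 0.0], [0.0, -2.0 + 6.0 * x[1] ** 2]])

x = np.zeros(2)                 # gradient vanishes here, yet x is not a minimum
w, V = np.linalg.eigh(hessian(x))
assert w[0] < 0                 # strict saddle: negative curvature exists
delta = 0.1 * V[:, 0]           # small step along the most negative eigendirection
print(f(x), f(x + delta))       # 0.0 -> -0.00995: the objective strictly decreases
```

Every method below automates some version of this step, differing only in how the direction of negative curvature is found (full eigendecomposition, Hessian-vector products, or gradient perturbations).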
A representative Hessian-vector-product-based method is the accelerated algorithm of [169]. In each iteration, based on an estimate of the smallest eigenvalue of the current Hessian, the algorithm decides whether to move along the direction computed by Negative-Curvature-Descent (so as not to get trapped in saddle points), or to apply Almost-Convex-AGD to optimize an almost convex function. See [169] for details. This method converges to an ε-second-order stationary point in O(ε^{−7/4} log(1/ε)) steps, where each step involves computing a Hessian-vector product.
Another method that achieves about the same computational complexity is that of Agarwal et al. [171], a fast variant of cubic regularization. The key idea is to invoke, among other things, fast multiplicative approximations to accelerate the subproblem of the cubic-regularized Newton step. The interested reader may consult [171] for details.

Gradient-based algorithms
Here, an oracle outputs ∇f (x) for any given x. These methods are computationally efficient since only first-order information is required.
Ge et al. [156] initiated this line of work by designing a first-order algorithm that provably escapes strict saddle points in polynomial time. The algorithm proposed therein is a noise-injected version of (stochastic) gradient descent, namely, x_{t+1} = x_t − η_t (∇f(x_t) + ζ_t), where ζ_t is noise sampled uniformly from a sphere, and ∇f(x_t) can also be replaced by a stochastic gradient. The iteration complexity, however, depends polynomially on the dimension, which grows prohibitively as the ambient dimension of x_t increases. A similarly high iteration complexity holds for [172], which is based on injecting noise into normalized gradient descent. The computational guarantee was later improved by perturbed gradient descent, proposed in [173]. In contrast to (154), perturbed GD adds noise to the iterate before computing the gradient, namely, x_{t+1} = x_t − η_t ∇f(x_t + ζ_t), with ζ_t uniformly drawn from a sphere. The crux of perturbed GD is to realize that strict saddles are unstable: with a slight perturbation, it is possible to escape them and make progress. It has been shown that perturbed GD finds an ε-second-order stationary point in O(ε^{−2}) iterations (up to some logarithmic factor), and is hence almost dimension-free. Note that each iteration only needs to call the gradient oracle once. Moreover, this can be further accelerated via Nesterov's momentum-based methods [170]. Specifically, [174] proposed a perturbed version of Nesterov's accelerated gradient descent (AGD), which adds perturbation to AGD and combines it with an operation similar to the Negative-Curvature-Descent routine described above. This accelerated method converges to an ε-second-order stationary point in O(ε^{−7/4}) iterations (up to some log factor), matching the computational complexity of [169,171].
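A minimal sketch contrasting vanilla GD with perturbed GD of the form x_{t+1} = x_t − η∇f(x_t + ζ_t), on a toy objective with a strict saddle at the origin (the objective, step size, and perturbation radius are our own choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# toy objective f(x) = (x1^2 - 1)^2 + x2^2:
# the origin is a strict saddle (Hessian diag(-4, 2)); minima at (+-1, 0), f = 0
def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2

def grad(x):
    return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

def sphere(radius):
    z = rng.standard_normal(2)
    return radius * z / np.linalg.norm(z)

eta, radius, T = 0.05, 0.01, 500
x_plain = np.zeros(2)            # start exactly at the strict saddle
x_pert = np.zeros(2)
for _ in range(T):
    x_plain -= eta * grad(x_plain)                   # vanilla GD: stuck forever
    x_pert -= eta * grad(x_pert + sphere(radius))    # perturbed GD escapes

print(f(x_plain), round(f(x_pert), 4))   # 1.0 vs. nearly 0
```

Starting GD exactly at a saddle is of course an adversarial (measure-zero) situation, but it isolates the mechanism: the perturbation seeds a component along the unstable direction, which the dynamics then amplify.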
There are also several other algorithms that provably work well in the presence of stochastic / incremental first-order and / or second-order oracles [175][176][177]. Interested readers are referred to the ICML tutorial [178] and the references therein. Additionally, we note that it is possible to adapt many of the above mentioned saddle-point escaping algorithms onto manifolds; we refer the readers to [152,166,179].
Finally, it is worth noting that: in addition to the dependency on ε, all of the above iteration complexities also rely on (1) the smoothness parameter, (2) the Lipschitz constant of the Hessians, and (3) the local strong convexity parameter. These parameters might depend on the problem size 16 . As a consequence, the resulting iteration complexities might be significantly larger than, say, O(ε −7/4 ) for some large-scale problems.
We have also left out several important nonconvex problems and methods in order to stay focused, including but not limited to (1) blind calibration and finding sparse vectors in a subspace [91,199,200]; (2) tensor decomposition and mixture models [201,202]; (3) parameter recovery and inverse problems in shallow and deep neural networks [163,[203][204][205][206]; (4) smoothed analysis of Burer-Monteiro factorization to semidefinite programs [207][208][209]. The interested readers are referred to [210] for an extensive list of further references.
Finally, we conclude this paper by singling out several exciting avenues for future research: • characterize generic landscape properties that enable fast convergence of gradient methods from random initialization; • relax the stringent assumptions on the statistical models underlying the data; for example, a few recent works studied nonconvex phase retrieval under more physically-meaningful measurement models [180,181]; • develop robust and scalable nonconvex methods that can handle distributed data samples with strong statistical guarantees; • and identify new classes of nonconvex problems that admit efficient optimization procedures.

A Proof of Theorem 3
To begin with, simple calculation reveals the expression of the Hessian ∇²f(x) for any z, x ∈ Rⁿ (see also [30, Lemma 4.3]). With the notation (23) at hand, we can rewrite z^T∇²f(x)z in a form whose last line uses the symmetry of A_i. To establish local strong convexity and smoothness, we need to control z^T∇²f(x)z for all z. The key ingredient is to show that z^T∇²f(x)z is not too far away from g(x, z) := ⟨xx^T − x♮x♮^T, zz^T⟩ + 0.5‖zx^T + xz^T‖²_F, a function that can easily be shown to be locally strongly convex and smooth. To this end, we resort to the RIP (see Definition 6). When A satisfies the 4-RIP, Lemma 3 supplies the needed bounds, where the last step holds as long as ‖x − x♮‖₂ ≤ ‖x♮‖₂; a similar bound controls the remaining term. As a result, if δ₄ is small enough, putting the above results together implies that z^T∇²f(x)z is sufficiently close to g(x, z). A little algebra then yields two-sided bounds on z^T∇²f(x)z for all z (see Appendix B), which furnish the local strong convexity and smoothness parameters. Applying Lemma 1 thus establishes the theorem.
C Modified strong convexity for (45)

Here, we demonstrate that when X is sufficiently close to X♮, the objective function f_∞(·) of (45) exhibits the modified form of strong convexity (50). Set V = ZH_Z − X♮. In view of (46), it suffices to show that g(X, V) := 0.5‖XV^T + VX^T‖²_F + ⟨XX^T − X♮X♮^T, VV^T⟩ > 0 for all X sufficiently close to X♮ and all Z. To this end, we first record a simple perturbation bound on |g(X, V) − g(X♮, V)| (the proof is omitted), valid for some universal constant c₁ > 0 provided that ‖X − X♮‖_F is small enough. We then turn attention to g(X♮, V): the key step is to recognize that X♮^T ZH_Z ⪰ 0 [51, Theorem 2], which implies that Tr(X♮^T ZH_Z X♮^T ZH_Z) ≥ 0 and Tr(X♮^T ZH_Z X♮^T X♮) ≥ 0. Thus, g(X, V) ≥ g(X♮, V) − |g(X, V) − g(X♮, V)| ≥ 0.5σ_r(X♮^T X♮)‖VV^T‖_F, as long as ‖X − X♮‖_F is no larger than σ_r(X♮^T X♮)‖VV^T‖_F / (2c₁‖X♮‖_F · ‖V‖_F). In summary, the modified strong convexity (50) follows.

D Proof of Theorem 22
We start with the rank-r PSD case, where we denote by X₀ the initial estimate (i.e. X₀ = L₀ = R₀ in this case) and X♮ the ground truth. Observe that ‖X₀X₀^T − M♮‖ can be bounded through a chain of inequalities: the second step follows since X₀X₀^T is the best rank-r approximation of Y (and hence ‖X₀X₀^T − Y‖ ≤ ‖X♮X♮^T − Y‖), and the last step follows from Lemma 8. A useful lemma from [23, Lemma 5.4] then allows us to upper bound dist²(X₀, X♮) directly by the Euclidean distance between the corresponding low-rank matrices.
In fact, [119] shows a stronger consequence of the RIP: one can improve the left-hand side of (161) to ‖X₀X₀^T − M♮‖_F, which allows relaxing the requirement on the RIP constant to δ_{2r} ≲ √ζ/(√r κ) [23]. The asymmetric case can be proved in a similar fashion.

E Proof of Theorem 33 (rank-1 case)
Before proceeding, we first single out an immediate consequence of the first-order optimality condition (143a) that will prove useful: identity (163), valid for any critical point x. This fact basically says that any critical point of f(·) stays very close to the truth in the subspace spanned by this point.
To verify (144), we observe an identity whose last line follows from (143a) after a little algebra; combined with (25), this gives (164). We then need to show that (164) is negative. To this end, we quote an elementary algebraic inequality from [30, Lemma 4.4] concerning x(x − x♮)^T. This, taken collectively with (164) and (163), yields the claim.

Proof of Claim (163). The identity (163) follows by rearranging the first-order optimality condition (143a).