Fixed-Time Gradient Dynamics With Time-Varying Coefficients for Continuous-Time Optimization

In this article, we propose fixed-time gradient dynamics with time-varying coefficients for continuous-time optimization. We first investigate the Lyapunov stability conditions that allow us to achieve fixed-time stability of time-varying dynamical systems. We then apply them to continuous-time optimization problems. We show that, under the proposed fixed-time gradient dynamics with suitably chosen time-varying coefficients, the search trajectories converge to their optima in fixed time from any initial point at a very fast rate. Simulation results are given to show the effectiveness of the proposed fixed-time gradient dynamics with tunable time-varying coefficients for continuous-time optimization.


I. INTRODUCTION
The dynamical-systems approach to continuous-time optimization problems has attracted much attention since the early days [1], [2] and has become a growing interdisciplinary research area, see, e.g., [3] and [4]. It has provided deep insights into optimization mechanisms and has been recognized as a valuable tool for developing and studying numerical algorithms for optimization problems. One of the most popular algorithms is gradient descent (GD), which is widely used for solving optimization problems. It has also been frequently applied in practice, such as for training the weights of artificial neural networks [5]. However, one major drawback of the GD algorithm is that it converges slowly and can easily be trapped in local optima. Consequently, there has been much research on the theory and practice of accelerated first-order schemes, see [5], [6], and [7].
In many industrial applications, such as high-precision robot control systems, it is desirable for the trajectory of a dynamical system to converge to the system equilibrium in finite time rather than merely asymptotically. Mathematically speaking, asymptotic stability means that the closer the state is to the equilibrium, the slower the convergence, so the equilibrium is never reached exactly. By introducing terms with fractional powers, we can reverse this situation: the closer to the equilibrium, the faster the convergence. Recently, using dynamics with finite-time convergence as a GD mechanism has received increasing interest due to its ability to ramp up the convergence speed, and it has found many applications in fast control and optimization. The first work was [8], where the finite-time control mechanism was studied. A rigorous foundation for the theory of finite-time stability of continuous autonomous systems was first provided in [9] and was extended to many control problems, such as finite-time stabilization of nonlinear systems [10]. Other finite-time stability-based control strategies include supertwisting-algorithm-based control [11] and terminal sliding-mode control [12]. However, the settling time, or time of convergence, depends on the distance of the initial condition from the equilibrium point. A stronger notion, called fixed-time stability, in which the convergence times have a uniform upper bound over all initial values, was recently introduced in [13], where a sufficient condition was derived to ensure the fixed-time stability of uncertain linear plants.
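To make the contrast concrete, the following sketch (forward-Euler integration; the step size, horizon, and initial condition are illustrative choices, not values from this article) compares the asymptotic flow ẋ = −x with the finite-time flow ẋ = −sign(x)|x|^(1/2), whose solution from x(0) = x0 reaches zero exactly at t = 2√|x0|:

```python
import math

def simulate(rhs, x0, t_end, dt=1e-3):
    """Forward-Euler integration of a scalar ODE x' = rhs(x)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * rhs(x)
        if abs(x) < 1e-12:  # treat tiny values as the equilibrium
            x = 0.0
    return x

# Asymptotic convergence: x(t) = x0 * exp(-t), never exactly zero.
x_exp = simulate(lambda x: -x, x0=1.0, t_end=3.0)

# Finite-time convergence: the fractional power -sign(x)*|x|**0.5
# reaches zero at t = 2*sqrt(|x0|) = 2 here.
x_fin = simulate(lambda x: -math.copysign(abs(x) ** 0.5, x), x0=1.0, t_end=3.0)
```

By t = 3, the fractional-power flow has numerically reached the equilibrium, while the linear flow is still at roughly e^(−3).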
Using finite-time or fixed-time stability properties to address continuous-time optimization problems has been considered recently. In [14], two discontinuous normalized modifications of gradient flows were proposed to achieve finite-time convergence, but the objective function needs to be twice continuously differentiable and strongly convex. Using the Polyak-Łojasiewicz (PL) inequality, Romero and Benosman [15] provided first- and second-order dynamical systems that achieve finite-time convergence to the minima of a given sufficiently regular cost function. They also proposed second-order discontinuous gradient flows with finite-time convergence guarantees for locally strongly convex (time-varying) cost functions in [16]. However, the finite-time convergence in [15] and [16] is only attained if the initial points are near the optimal one. To achieve fixed-time convergence, a gradient-flow scheme was proposed in [17] that yields convergence to the optimal point of a convex optimization problem within a fixed time from any given initial condition. While having upper bounds on the convergence time is beneficial for industrial applications (e.g., robots reaching a target in guaranteed time regardless of uncertain loads), these bounds may be quite conservative, meaning the actual reaching time may be much sooner than the upper bound suggests. Aldana-López et al. [18], using a multiplicative-constant strategy, showed that the upper bound can be made as small as desired. Although larger coefficients can make the method converge faster, the trajectories can also end up oscillating around the local minimum if the coefficients are too large. A time-varying approach may therefore resolve this issue, because its coefficients can start large and decay as time passes. Analysis of time-varying systems has been carried out with the Lyapunov method [19].
In this work, we establish fixed-time gradient dynamics with time-varying coefficients, which provide additional flexibility and freedom in choosing the coefficients to shape the convergence profile and to reduce oscillations around the local minimum. We first extend the Lyapunov function method to derive Lyapunov stability conditions for fixed-time stability of time-varying dynamical systems. These conditions are then applied to construct dynamical systems with time-varying coefficients that deliver fixed-time convergence guarantees for continuous-time optimization problems. To the best of our knowledge, this is the first work on the fixed-time stability analysis of time-varying dynamical systems that can be used to characterize typical continuous-time optimization algorithms.
The rest of this article is organized as follows. Section II provides the Lyapunov stability conditions for fixed-time stability of time-varying dynamical systems. Section III presents the fixed-time gradient-based method with time-varying coefficients (FGTC) and the Newton-like method with time-varying coefficients (FNTC) for continuous-time optimization problems with fixed-time convergence. Numerical simulations demonstrating the improved performance of the proposed algorithms are then presented.
Notations: The set of real numbers is denoted by R, the set of nonnegative real numbers by R+, and the set of positive real numbers by R++. We use the symbol ∘ for the composition of functions.

II. LYAPUNOV STABILITY CONDITIONS
In this section, we derive the Lyapunov stability conditions for fixed-time stability of time-varying dynamical systems. Consider the nonlinear time-varying dynamical system

ẋ(t) = g(t, x(t)), x(t_0) = x_0, t ≥ t_0 (1)

where x : [t_0, +∞) → R^n and g : [t_0, +∞) × R^n → R^n is a continuous function.

Definition 2.1 (cf. [9], [13]): We say that x* ∈ R^n is an equilibrium of (1) if g(t, x*) = 0 for all t ≥ t_0. An equilibrium x* of (1) is said to be
1) Lyapunov stable if for every ε > 0, there exists δ = δ(ε, t_0) > 0 such that

‖x(t_0) − x*‖ < δ implies ‖x(t) − x*‖ < ε for all t ≥ t_0; (2)

2) uniformly stable if for every ε > 0, there exists δ = δ(ε) > 0, independent of t_0, such that (2) is satisfied;
3) fixed-time convergent if for every x_0 ∈ R^n and every solution x(t) of (1), there exists T = T(t_0), independent of x_0, such that x(t) = x* for all t ≥ T;
4) fixed-time stable if it is Lyapunov stable and fixed-time convergent;
5) uniformly fixed-time stable if it is uniformly stable and fixed-time convergent.
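The independence of the settling time from x_0 in item 3) can be illustrated with the standard autonomous double-power flow ẋ = −sign(x)(|x|^(1/3) + |x|^3), a textbook fixed-time example rather than this article's system: its settling time is bounded by ∫_0^1 v^(−1/3) dv + ∫_1^∞ v^(−3) dv = 3/2 + 1/2 = 2 for every initial condition. A minimal adaptive-Euler sketch (all numerical values illustrative):

```python
import math

def settle(x0, t_end=2.5):
    """Adaptive forward-Euler for x' = -sign(x)*(|x|**(1/3) + |x|**3).

    The step size shrinks when |x| is large so the cubic term stays stable.
    """
    x, t = float(x0), 0.0
    while t < t_end:
        dt = min(1e-3, 0.1 / (1.0 + x * x))  # stability guard for the cubic term
        rate = abs(x) ** (1.0 / 3.0) + abs(x) ** 3
        step = dt * rate
        if step >= abs(x):   # would overshoot the equilibrium: stop there
            x = 0.0
        else:
            x -= math.copysign(step, x)
        t += dt
    return x

# The settling time is bounded by 2 for *every* initial condition,
# so both trajectories have reached zero well before t_end = 2.5.
small = settle(1.0)
large = settle(1e6)
```

The cubic term dominates far from the origin and the cube-root term near it, which is exactly the mechanism that makes the bound uniform in x_0.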
We now establish the Lyapunov stability conditions for fixed-time convergence of time-varying dynamical systems.
Now, the definition of T and the continuity of V imply that V (T ) = 0.
Combining this with the assumption that V is nonnegative, we obtain that V(t) = 0 for all t ≥ T. By (4), since H_2 is strictly increasing, t_1 < H_2^{-1}(·) < +∞. By the definition of T_1 and the continuity of V, we have V(T_1) = 1. Combining (10) with (5), we obtain the displayed estimate, where the last inequality is due to (10) and the fact that H_1 is strictly increasing. The conclusion then follows from (9), (12), and the monotonicity of H_1 and H_2.

Remark 2.3: Regarding Theorem 2.2, we note that condition (4) is satisfied as soon as ∫_{t_0}^{+∞} h_2(s) ds = +∞, and condition (5) is satisfied as soon as the analogous condition holds for h_1. For the discussions in the rest of this article, h_1 and h_2 are given continuous functions from [t_0, +∞) to R++. The following corollary provides some cases in which the upper bound of T in (6) can be explicitly computed.
In the following, we present two more special cases of h 1 and h 2 that we will use in the numerical simulation section.
Example 2.5: Let s_0 < 1 and let h_1 and h_2 be as specified. Then, for t > s_0, the settling-time bound in (6) can be evaluated in closed form.

Example 2.6: Let s_0 < 1 and let h_1 and h_2 be as specified. Then, for t > s_0, according to (6), we obtain an explicit upper bound. On the other hand, inspired by Theorem 1 in [18], we can obtain a tighter upper bound for the settling time in this special case. Indeed, instead of assumptions (4) and (5), we impose the corresponding integral conditions, where Γ(·) is the gamma function defined as Γ(z) = ∫_0^{+∞} s^(z−1) e^(−s) ds for z > 0.
Combining this with the fact that H_1 is strictly increasing, we obtain the desired bound.

We now present the fixed-time stability theorem for time-varying dynamical systems.

Proposition 2.9 (Fixed-time stability): Under the hypotheses of Theorem 2.2, suppose further that there exists a continuous, strictly increasing function φ : R+ → R+ satisfying φ(0) = 0 such that, for all t ≥ t_0 and any solution x(t) of (1), φ(‖x(t) − x*‖) ≤ V(t). Then, the following hold.
i) x* is a fixed-time stable equilibrium of (1) with settling time satisfying (6).
ii) Suppose, further, that there exists a continuous increasing function ψ : R+ → R+ satisfying ψ(0) = 0 such that, for all t ≥ t_0, V(t_0) ≤ ψ(‖x(t_0) − x*‖). Then, x* is uniformly fixed-time stable.

III. CONTINUOUS-TIME OPTIMIZATION PROBLEMS
In this section, we shall use the results in Section II to address the typical continuous-time optimization problem, namely, the minimization problem

min_{x ∈ R^n} f(x) (23)

where f : R^n → R is a continuously differentiable function. Two typical search algorithms, namely, the gradient-based method and the Newton-like method, are studied.

A. Gradient-Based Method: FGTC Scheme
To solve (23), the following scheme was proposed in [22]:

ẋ(t) = −∇f(x(t)) / ‖∇f(x(t))‖^((p−2)/(p−1)) (24)

where p > 2. The convergence rate for the solutions of this scheme was given as f(x(t)) − f* = O(1/t^(p−1)) provided that the level sets of f are bounded. Under the Łojasiewicz inequality, it is proved in [15] that the trajectories of dynamics (24) converge to a local minimizer of f in finite time. To obtain fixed-time convergence, Garg and Panagou [17] presented a modification of (24):

ẋ(t) = −c_1 ∇f(x(t))/‖∇f(x(t))‖^((p_1−2)/(p_1−1)) − c_2 ∇f(x(t))/‖∇f(x(t))‖^((p_2−2)/(p_2−1)) (25)

where c_1, c_2 > 0, p_1 > 2, and 1 < p_2 < 2. Inspired by (25), we are concerned with the dynamical system

ẋ(t) = −h_1(t) ∇f(x(t))/‖∇f(x(t))‖^p − h_2(t) ∇f(x(t))/‖∇f(x(t))‖^q (26)

where 0 < p < 1, q < 0, and h_1, h_2 : [t_0, +∞) → R++ are continuous. Here, we adopt the convention that ẋ(t) = 0 when ∇f(x(t)) = 0. Our algorithm allows more flexibility in the parameters, and we will show that, by choosing appropriate time-varying h_1(t) and h_2(t), the convergence can be both fixed-time and fast.
Then, the following hold. i) x ∈ R n is an equilibrium of (26) ⇐⇒ ∇f (x) = 0. ii) F is continuous on [t 0 , ∞) × R n and the solution of (26) exists.
(ii): By assumption, F is continuous at every point (t̄, x̄) with ∇f(x̄) = 0, since lim_{(t,x)→(t̄,x̄)} F(t, x) = 0 = F(t̄, x̄). Hence, F is continuous on R+ × R^n and, by Theorem 1.1 in [23], a solution of (26) exists.
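For a quadratic test function, a double-power gradient flow of the form ẋ = −h_1(t)∇f(x)/‖∇f(x)‖^p − h_2(t)∇f(x)/‖∇f(x)‖^q can be integrated with a simple Euler sketch. The form of the flow here is our reading of (26), and the step size, horizon, coefficients, and test function are illustrative assumptions:

```python
import numpy as np

def fgtc_flow(grad, x0, h1, h2, p=0.5, q=-1.0, dt=1e-4, t_end=3.0):
    """Euler integration of x' = -h1(t)*g/||g||**p - h2(t)*g/||g||**q."""
    x, t = np.asarray(x0, float).copy(), 0.0
    while t < t_end:
        g = grad(x)
        n = np.linalg.norm(g)
        if n < 1e-10:        # convention: x' = 0 when the gradient vanishes
            break
        x -= dt * (h1(t) * g / n**p + h2(t) * g / n**q)
        t += dt
    return x

# Quadratic test function f(x) = 0.5*||x||^2, gradient = x.
x_final = fgtc_flow(lambda x: x, x0=np.full(5, 4.0),
                    h1=lambda t: 1.0, h2=lambda t: 1.0)
```

With p in (0, 1) the first term dominates near the optimum (finite-time behavior), while q < 0 makes the second term dominate far away, which is what bounds the settling time uniformly in x0.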
We now consider the following condition.

Assumption 3.2 (PL inequality): There exist 0 < θ < 1 and c > 0 such that, for all x ∈ R^n,

‖∇f(x)‖ ≥ c (f(x) − f*)^θ

where f* = min_{x ∈ R^n} f(x). This condition with θ = 1/2 was proposed by Polyak [24] and Łojasiewicz [25] in 1963. According to Appendix 2.3 in [26], the class of functions f(x) = g(Ax) for any strongly convex function g and any matrix A, e.g., f(x) = ‖Ax − b‖^2, satisfies Assumption 3.2 with θ = 1/2.
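For f(x) = ‖Ax − b‖^2, the PL inequality with θ = 1/2 can be checked numerically: if x̂ is a least-squares solution, then ∇f(x) = 2AᵀA(x − x̂) and f(x) − f* = ‖A(x − x̂)‖^2, so ‖∇f(x)‖ ≥ 2σ_r (f(x) − f*)^(1/2), where σ_r is the smallest nonzero singular value of A. A quick sketch on random data (the problem sizes and sampling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 10                  # illustrative sizes; A is full rank a.s.
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimizer of f
f_star = np.sum((A @ x_hat - b) ** 2)
sigma_min = np.linalg.svd(A, compute_uv=False).min()

def f(x):    return np.sum((A @ x - b) ** 2)
def grad(x): return 2.0 * A.T @ (A @ x - b)

# PL inequality with theta = 1/2: ||grad f(x)|| >= c*(f(x) - f*)**0.5
# holds with c = 2*sigma_min(A) at every test point.
ratios = []
for _ in range(100):
    x = x_hat + rng.standard_normal(n) * 10.0   # random test points
    ratios.append(np.linalg.norm(grad(x)) / np.sqrt(f(x) - f_star))

worst = min(ratios)
```

The inequality holds globally (not just near the minimizer), which is what distinguishes PL functions from merely locally strongly convex ones.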
Thus, for all t ≥ T, we have f(x(t)) = f* and ∇f(x(t)) = 0, and so ẋ(t) = 0 by Lemma 3.1. Therefore, x(t) is constant on [T, +∞); hence, there exists a minimizer x* of f such that x(t) = x* for all t ≥ T. Finally, if f has a unique minimizer x*, then f − f* is positive definite with respect to x*, and the proof is complete due to Remark 2.10.
Remark 3.4: Based on Proposition 3.3 in combination with Corollary 2.4, the upper bound of T in (6) for the following three typical cases can be estimated.

B. Newton-Like Method: FNTC Scheme
Newton-like methods are very effective in optimization [17], and they can be improved using our approach. Assume that f is twice differentiable. In [17], a Newton-like algorithm was provided for the case where ∇²f is invertible, but the case where ∇²f is not invertible was not considered. In this section, we consider both cases and propose two Newton-like methods.
Proposition 3.5 (Fixed-time without PL inequality): Let 0 < p < 1 and q < 0. Suppose that ∇²f is invertible and that (4) and (5) hold for c_1 := 2, c_2 := 2, α := (2−p)/2, and β := (2−q)/2. Then, regardless of x(0), the trajectories x(t) of

ẋ(t) = −[∇²f(x(t))]^{-1} ( h_1(t) ∇f(x(t))/‖∇f(x(t))‖^p + h_2(t) ∇f(x(t))/‖∇f(x(t))‖^q ) (30)

converge to a critical point of f in a fixed time T with an upper bound as in (6). Moreover, if f has a unique critical point x*, then x* is a uniformly fixed-time stable equilibrium of (30).
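A minimal Euler sketch of a Newton-like flow of this type follows; the right-hand side is our reading of (30), and the quadratic test function, coefficients, and step size are illustrative assumptions:

```python
import numpy as np

def fntc_newton_flow(grad, hess, x0, h1, h2, p=0.5, q=-1.0, dt=1e-4, t_end=3.0):
    """Euler integration of the Newton-like flow
    x' = -hess(x)^{-1} * (h1(t)*g/||g||**p + h2(t)*g/||g||**q)."""
    x, t = np.asarray(x0, float).copy(), 0.0
    while t < t_end:
        g = grad(x)
        n = np.linalg.norm(g)
        if n < 1e-10:        # gradient has vanished: at a critical point
            break
        d = h1(t) * g / n**p + h2(t) * g / n**q
        x -= dt * np.linalg.solve(hess(x), d)   # Newton direction
        t += dt
    return x

# Ill-conditioned quadratic f(x) = 0.5 * x.T @ Q @ x, Hessian Q.
Q = np.diag([1.0, 10.0, 100.0])
x_final = fntc_newton_flow(lambda x: Q @ x, lambda x: Q,
                           x0=np.array([3.0, 3.0, 3.0]),
                           h1=lambda t: 1.0, h2=lambda t: 1.0)
```

On a quadratic, the Hessian preconditioning makes the gradient norm obey the same scalar double-power dynamics regardless of the conditioning of Q, which is why the constants c_1 = c_2 = 2 and exponents (2−p)/2, (2−q)/2 appear in the hypotheses.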
Remark 3.6: Based on Proposition 3.5, the fixed times for the following typical cases can be obtained. 1) If h_1(t) = c_1 and h_2(t) = c_2, then T ≤ 1/(2c_1(1−α)) + 1/(2c_2(β−1)). In the following, we extend the abovementioned results to the situation where the Hessian matrix is not invertible.

IV. SIMULATION STUDIES
In this section, two simulation studies are presented to demonstrate the effectiveness of the proposed method.
The first study considers an objective of regularized least-squares type, where A ∈ R^{m×n} and b ∈ R^m. According to Polyak [24], f satisfies the PL condition with θ = 1/2. Here, A ∈ R^{200×200} and b ∈ R^{200} are randomly chosen, and λ = 1. Fig. 1 compares the performance of schemes (26) and (30).

In the following, we consider the case when the coefficients are large; for example, choose h_1 = 2000 and h_2 = 10 000 for FGCC, and h_1(t) = 20/(t + 0.01) and h_2(t) = 100/(t + 0.01) for FGTC. Here, λ = 0.1 and m = n = 100. The parameters h_1 and h_2 of FGCC and FGTC coincide at t = 0, and thereafter the parameters of FGTC are smaller than those of FGCC. However, Fig. 2 shows that FGCC ends up oscillating around the local minimum, while FGTC is more stable and has better performance.

Now we choose m = n = 20, with h_1 = 2000 and h_2 = 10 000 for FGCC, and h_1(t) = 2000/(t + 0.97) and h_2(t) = 10 000/(t + 0.97) for FGTC. Then, both settling-time estimate bounds satisfy T_FGCC = T_FGTC = 0.065. It can be seen in Fig. 3 that FGTC is more stable around the local minimum than FGCC.

The second study considers a binary-classification objective, where w represents the weights for the columns of the n × 2 data matrix X, x_i is the ith row of X, and y_i ∈ {−1, 1} is its corresponding label.
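The qualitative effect in Fig. 2, where large constant coefficients chatter around the minimum while initially equal but decaying coefficients settle, can be reproduced with a one-dimensional sketch of the gradient flow for f(x) = x²/2 (our reading of the scheme; the step size, horizon, and initial condition are illustrative and are not the article's simulation settings):

```python
def run(h1, h2, x0=0.5, p=0.5, q=-1.0, dt=1e-4, t_end=2.0):
    """Euler discretization of x' = -h1(t)*g/|g|**p - h2(t)*g/|g|**q, g = x."""
    x, t = x0, 0.0
    while t < t_end:
        g = x
        n = abs(g)
        if n > 1e-14:        # convention: stop updating at the minimizer
            x -= dt * (h1(t) * g / n**p + h2(t) * g / n**q)
        t += dt
    return x

# Large constant coefficients (FGCC-like): the discretization chatters
# around the minimum with amplitude set by dt*h1.
x_const = run(h1=lambda t: 2000.0, h2=lambda t: 10_000.0)

# Same coefficients at t = 0, but decaying (FGTC-like): the chatter
# amplitude shrinks as h1(t) decays.
x_decay = run(h1=lambda t: 20.0 / (t + 0.01), h2=lambda t: 100.0 / (t + 0.01))
```

In the Euler discretization the fractional-power term produces a limit cycle of amplitude roughly (dt·h_1/2)² around the minimum, so a decaying h_1(t) shrinks the residual oscillation by orders of magnitude while keeping the fast initial transient.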

V. CONCLUSION
In this article, we have extended the Lyapunov stability condition for fixed-time stability to the time-varying dynamical systems. We have then derived fixed-time gradient dynamics with time-varying coefficients for optimization problems. We have shown that the proposed approach can improve the performance of fixed-time algorithms, and its effectiveness has been illustrated by numerical simulations.