Designing Sequences With Minimized Mean Sidelobe Level for Cognitive Radars

In this paper, a set of sequences is developed with a minimized mean sidelobe level (MSL). The problem is formulated for cognitive radars using the $l_1$-norm, where the target cognition determines which sidelobes should be suppressed. The cognitive radar configuration requires fast waveform regeneration in each cognition cycle. In this light, the computational burden of the algorithms developed here is shown to rest on the singular value decomposition (SVD) operation, and the randomization method is adopted to speed up the proposed algorithms. The resulting fast generation of sequences with a desired autocorrelation is key to their use in cognitive radars. We also consider two practically important cases and accommodate the proposed approach to them: unimodular and finite-alphabet sequences. The superiority of the developed algorithms, both in suppression level and in speed, is confirmed through extensive numerical simulations.


I. INTRODUCTION
Sequence design with low autocorrelation sidelobes has emerged as an important topic in signal processing, because transmitting such a sequence results in fewer false detections at the receiver. Radar waveform design comprises a large number of studies targeting different aspects of the problem. Traditional studies have focused on binary and polyphase methods; Barker [1], Golomb [2], and Frank [3] codes, and their contemporaries [4], are a few examples in this category. Two major weaknesses can be observed in traditional approaches. Firstly, the suppression level has equal emphasis along all sidelobes. New developments show that considering unequal suppression may result in significantly deeper suppression [5]. This approach is even more beneficial when cognition is also considered in radar systems: there, only part of the sidelobe region needs to be suppressed, since the whereabouts of the target are known to some extent through cognitive perception. Secondly, traditional approaches do not consider increasing the number of allowed phases. Increasing the number of phases, or assuming arbitrary complex sequences, i.e., continuous phase, can result in much lower sidelobes (see also [5]-[8]).
The associate editor coordinating the review of this manuscript and approving it for publication was Guolong Cui.
Inspired by nature, cognitive radar introduces a new way of thinking about radar systems [33], [34]. Many authors have investigated various aspects of this idea [35], [36]. In particular, its waveform design is mainly investigated based on several types of cognition [34]: range, Doppler, and Doppler-range. In the last of these, gathered cognition about the target can prioritize some regions of the Doppler-range plane, and the ambiguity function (AF) is designed by some authors to utilize this cognition [20], [37]-[40]. The autocorrelation [41], [42] and spectral power [43] are the focus for the first two types. Here, we focus on range cognition, where we observe that such systems require two features: 1) the ability to design the autocorrelation based on a desired pattern and 2) high speed to regenerate the waveform in each cognition cycle. This necessitates developing new algorithms to design the autocorrelation arbitrarily and quickly. In this regard, a fast algorithm with the ability to arbitrarily suppress autocorrelation sidelobes would be a suitable waveform design candidate for a cognitive radar system. An in-depth analysis of computational complexity would also constitute a cornerstone for reaching faster algorithms. In this article, we investigate using range cognition for designing the radar waveform. Established approaches like considering the MSL and using the cyclic algorithm (CA) are adopted to reach superior suppression levels and low time/computational complexity compared to high-performing benchmark algorithms. Compared with the literature, the major characteristics of the methods developed here are: • MSL minimization is formulated and solved through the CA. Its corresponding generic problem is revealed to be a geometric median problem, to which Weiszfeld's algorithm can be applied.
Furthermore, the unimodularity constraint is separated from the problem under study, resulting in an algorithm able to suppress more than half of the sidelobes to almost zero. This rare ability is particularly desirable when the cognitive radar is detecting several targets and needs a wide range of sidelobes to be suppressed.
• The computational burden of the proposed algorithms is determined to rest upon two points: the embedded SVD and Weiszfeld's algorithm. Accordingly, fast randomized SVD and loose criterion selection are applied to speed up the proposed algorithms. The presented algorithms' computational complexity is also derived mathematically and revealed to be proportional to the sequence length and the desired number of suppressed lags. The significant speedup in the generation process is also confirmed through numerical simulations. Fast waveform generation is particularly appealing for cognitive radars, where the waveform should be regenerated every time target cognition is gathered. Besides, the in-depth computational complexity analysis can serve as a basis for future similar algorithms.
• We further consider restricting the developed algorithms to finite-alphabet constellations. Having a finite-alphabet waveform significantly simplifies the implementation of the radar system.
In this paper, regular and bold lower-case letters are used to represent scalar and vector variables, respectively; bold upper case is reserved for matrices. The notations $(\cdot)^*$ and $(\cdot)^H$ denote the complex conjugate and the Hermitian (i.e., conjugate transpose), respectively. For a complex-valued scalar $a$, $|a|$ and $|a|_1$ represent the absolute value (i.e., Euclidean norm) and the Taxicab norm, respectively. That is, $|a|_1 = |\Re\{a\}| + |\Im\{a\}|$, where $\Re\{a\}$ and $\Im\{a\}$ represent the real and imaginary parts of the complex-valued $a$. The quantity $\|\mathbf{a}\|_1 = \sum_i |a_i|$ is the $l_1$ or Taxicab norm of the vector $\mathbf{a}$. Also, $\|A\|_{1,1}$ represents the entry-wise Taxicab or $l_1$-norm of the matrix $A$, defined by (see [44], Eq. 1.4.47)
$$\|A\|_{1,1} = \|\mathrm{vec}(A)\|_1,$$
where $\mathrm{vec}(A)$ is the vectorized format of $A$. This notation is consistent with the $L_{p,q}$ entrywise norm of the matrix $A$, defined by
$$\|A\|_{p,q} = \Bigg( \sum_{j} \Big( \sum_{i} |a_{ij}|^p \Big)^{q/p} \Bigg)^{1/q}.$$
Finally, $e_i$ represents the $i$-th canonical vector, that is, a vector with all elements equal to zero except for the $i$-th, which equals one.
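For illustration, the norms defined above can be checked numerically. The following NumPy sketch (not part of the original development) computes the scalar Taxicab norm, the vector $l_1$-norm, and the entry-wise matrix norm:

```python
import numpy as np

a = 3 - 4j
# Taxicab norm of a complex scalar: |a|_1 = |Re{a}| + |Im{a}|
l1_scalar = abs(a.real) + abs(a.imag)          # 3 + 4 = 7

v = np.array([1 + 1j, -2j, 3.0])
# l1 norm of a vector: sum of the (Euclidean) moduli of its entries
l1_vec = np.sum(np.abs(v))                     # sqrt(2) + 2 + 3

A = np.array([[1, -2], [3j, 4]])
# entry-wise l1 norm of a matrix: ||A||_{1,1} = ||vec(A)||_1
l11 = np.sum(np.abs(A))                        # 1 + 2 + 3 + 4 = 10
```

Note the distinction: on a single complex scalar, $|a|_1$ sums the real and imaginary parts, while inside the vector and matrix norms each entry contributes its modulus.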
The paper is organized as follows. In Section II, the problem is formulated in a general form and solved through the CA. In Section III, some improvements are discussed, and the algorithms are analyzed. In particular, the imposition of the unimodularity constraint is discussed in subsection III-A, and subsection III-C is devoted to developing a speedup for long-sequence design. The simulation results are then presented in Section IV. Finally, conclusions are given in Section V.

II. PROBLEM FORMULATION
Consider a complex-valued sequence $\{x_n\}_{n=1}^N$. This sequence can constitute the waveform transmitted from a cognitive radar or, more generally, a mono-static radar. The autocorrelation of this sequence is
$$r_k = \sum_{n=k+1}^{N} x_n x^*_{n-k}, \qquad k = 0, \ldots, N-1.$$
The suitability of the autocorrelation function can be described through several metrics, including the integrated sidelobe level (ISL), the peak sidelobe level (PSL), and the MSL. The first is defined by
$$\mathrm{ISL} = \sum_{k=1}^{N-1} |r_k|^2$$
and can be considered a Euclidean norm on the autocorrelation sidelobes. Since this norm is the most canonical, minimizing the ISL is the most popular approach for sequence design [5], [45]. Some studies show that minimizing other metrics like the PSL can suppress a large part of the autocorrelation [8]. It is defined by
$$\mathrm{PSL} = \max_{1 \le k \le N-1} |r_k|.$$
The MSL is the least investigated metric and is defined by
$$\mathrm{MSL} = \frac{1}{N-1}\sum_{k=1}^{N-1} |r_k|. \tag{6}$$
Akin to the ISL, the PSL and MSL can be thought of as the Chebyshev and Taxicab norms on the autocorrelation sidelobes, respectively. The PSL minimization is studied in [13]; but to the best of our knowledge, no study has been reported on MSL minimization. We consider the weighted MSL (WMSL), given by
$$\mathrm{WMSL} = \sum_{k=1}^{N-1} \lambda_k |r_k|, \tag{7}$$
where $\{\lambda_k\}_{k=1}^{N-1}$ are real-valued weights. The choice of this problem is based on two considerations. First, any cognition about the target position can prioritize suppression in some of the autocorrelation lags. Using (7), such a cognition can be accommodated in the $\lambda_k$ coefficients. Second, rapid sequence regeneration is required for cognitive radars since the sequence should be redesigned for each gathered cognition.
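The three metrics follow directly from the definitions above. The following minimal NumPy sketch (illustrative only) implements the aperiodic autocorrelation together with the ISL, PSL, and MSL:

```python
import numpy as np

def autocorr(x):
    """Aperiodic autocorrelation r_k = sum_{n=k+1}^{N} x_n * conj(x_{n-k})."""
    N = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:N - k])) for k in range(N)])

def isl(x):
    r = autocorr(x)
    return float(np.sum(np.abs(r[1:]) ** 2))   # integrated sidelobe level

def psl(x):
    r = autocorr(x)
    return float(np.max(np.abs(r[1:])))        # peak sidelobe level

def msl(x):
    r = autocorr(x)
    return float(np.mean(np.abs(r[1:])))       # mean sidelobe level
```

For example, for $x = (1, 1)$ the autocorrelation is $(2, 1)$, so all three metrics equal one.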
To design sequences with minimized MSL, let $X$ be the Toeplitz matrix built from shifted copies of the target sequence,
$$X = \begin{bmatrix} x_1 & 0 & \cdots & 0 \\ x_2 & x_1 & \ddots & \vdots \\ \vdots & x_2 & \ddots & 0 \\ x_N & \vdots & \ddots & x_1 \\ 0 & x_N & & x_2 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & x_N \end{bmatrix}.$$
The autocorrelation is obtained through the Gram matrix
$$(X^H X)_{ij} = r_{i-j}. \tag{9}$$
Putting (6) and (9) together, the MSL minimization problem can be formulated as
$$\min_{\{x_n\}} \; \| X^H X - r_0 I \|_{1,1}, \tag{10}$$
where $r_0$ is the autocorrelation peak and can be set to $r_0 = N$ if unimodularity or almost-unimodularity is expected. Note that (10) and (6) are not equivalent but related; the relation is clarified in Remark 1. Due to the quadratic nature and the non-differentiability of $\|\cdot\|_{1,1}$, a solution to this minimization problem cannot be found directly. Hence, similar to the methods developed in [5], [13], [45] for the ISL and PSL, minimizing the following metric is contemplated here:
$$\| X - \sqrt{r_0}\, L \|_{1,1}, \qquad \text{where } L^H L = I. \tag{11}$$
Therefore, the optimization problem in (10) is changed to the following constrained optimization:
$$\min_{\{x_n\},\, L} \; \| X - \sqrt{r_0}\, L \|_{1,1} \quad \text{s.t.} \quad L^H L = I. \tag{12}$$
The problems in (10) and (12) are equivalent in the sense that a sequence converging to the solution of the approximate problem converges to the solution of the main problem as well. This statement is rigorously proved in [46] in a general scenario. Next, we consider a case where suppressing only part of the autocorrelation is desired. In particular, consider the case in which only the lags $\{k_i\}_{i=1}^{Q}$ should be suppressed for some $Q$, where $\{k_i\}$ is an ordered sequence in the interval $[1, k_Q]$. In this case, the matrix $X$ should be altered properly so that only the important autocorrelation coefficients are created in (9). Accordingly, we denote by $\hat{X}$ the Toeplitz matrix above restricted to $k_Q$ columns (see [13], [45] for similar approaches), and let
$$T = \begin{bmatrix} e_{k_1} & e_{k_2} & \cdots & e_{k_Q} \end{bmatrix}_{k_Q \times Q}$$
collect the columns of the identity matrix corresponding to the desired coefficients. Then, we have
$$\tilde{X} = \hat{X}\, T. \tag{14}$$
Substituting $\tilde{X}$ instead of $X$ in (9) results in almost the desired autocorrelation coefficients. Suppressing the predetermined part of the autocorrelation in the MSL sense, we have
$$\min_{\{x_n\},\, L} \; \| \tilde{X} - \sqrt{r_0}\, L \|_{1,1} \quad \text{s.t.} \quad L^H L = I. \tag{15}$$
Figure 1 illustrates the regions of influence for $X$, $\hat{X}$, and $\tilde{X}$. In particular, we observe that $X$ considers all sidelobes while $\tilde{X}$ is only dedicated to specific lags of interest.
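Under the shifted-column (Toeplitz) construction described above (our reading of the extraction-damaged original; the exact matrix sizes are assumptions), the Gram relation $(X^H X)_{ij} = r_{i-j}$ can be verified numerically:

```python
import numpy as np

def shift_matrix(x, k_q):
    """(N + k_q - 1) x k_q matrix whose i-th column is x delayed by i samples."""
    n = len(x)
    X = np.zeros((n + k_q - 1, k_q), dtype=complex)
    for i in range(k_q):
        X[i:i + n, i] = x
    return X

x = np.array([1, 1j, -1, -1j], dtype=complex)
X = shift_matrix(x, 3)
G = X.conj().T @ X            # Gram matrix: G[i, j] = r_{i-j}
# reference autocorrelation r_k = sum_n x_n * conj(x_{n-k})
r = np.array([np.sum(x[k:] * np.conj(x[:len(x) - k])) for k in range(len(x))])
```

The diagonal of $G$ equals $r_0$ and the sub-diagonals reproduce $r_1, r_2, \ldots$, which is exactly why subtracting $r_0 I$ in (10) leaves only sidelobe terms.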
Remark 1: (See Remark 1 in [5].) The above discussion is not completely equivalent to minimizing the MSL, but rather the weighted MSL for specific weights. In fact, when $k_Q = Q$, we have $\tilde{X} = \hat{X}$. When the entrywise $l_1$-norm is computed, the number of times $r_k$ appears in the summation is $2(k_Q - k)$. Hence, (15) is equivalent to the WMSL minimization with $\lambda_k = 2(k_Q - k)$. When $Q < k_Q$, the weights cannot be computed directly, but can be regarded as proportional to the cumulative occurrence of $r_k$ and $r_k^*$ in $\tilde{X}^H \tilde{X}$.

Henceforth, the CA is adopted to solve (15). It is solved with respect to $\{x_n\}_{n=1}^N$ and $L$ cyclically. That is, given $\{x_n\}_{n=1}^N$, (15) is solved to achieve $L$, and vice versa. However, for the entrywise Taxicab norm, (15) does not possess a closed-form solution in $L$. Since only an improvement is required in each iteration, an approximate solution for $L$ suffices, indicating that $L$ can be obtained by solving the Euclidean version of (15) instead (see [5], [45] and references therein). Let
$$\tilde{X}^H = U \Sigma V^H \tag{17}$$
be the economy-sized, or thin, SVD [47] of $\tilde{X}^H$, where $(\cdot)^H$ denotes the conjugate transpose. Then, the solution of the Euclidean-norm version of (15) is
$$L = V U^H. \tag{18}$$
The other step in the cycle is to find the solution of (15) with respect to $\{x_n\}_{n=1}^N$, with the assumption that $L$ is a known constant. To achieve this goal, consider a specific element of $\{x_n\}_{n=1}^N$, say $x$, and let $\{\mu_k\}_{k=1}^{Q}$ be the sequence constituted by the elements of $\sqrt{N} L$ occupying the same positions as $x$ in $\tilde{X}$. Then, the generic form of the optimization in (15) is
$$\min_{x} \; \sum_{k=1}^{Q} |x - \mu_k|, \tag{19}$$
where $|\cdot|$ denotes the absolute value of complex numbers. Interestingly, (19) is a geometric median problem in one dimension and can be solved by applying Weiszfeld's algorithm [48].
Accordingly, the optimal solution to this problem satisfies
$$x^{\star} = \frac{\sum_{k=1}^{Q} \mu_k / |x^{\star} - \mu_k|}{\sum_{k=1}^{Q} 1 / |x^{\star} - \mu_k|}, \tag{20}$$
where the following equilibrium is guaranteed:
$$\sum_{k=1}^{Q} \frac{x^{\star} - \mu_k}{|x^{\star} - \mu_k|} = 0. \tag{21}$$
Equation (20) can be restated as the convex combination
$$x^{\star} = \sum_{k=1}^{Q} \beta_k \mu_k, \qquad \beta_k = \frac{1/|x^{\star} - \mu_k|}{\sum_{j=1}^{Q} 1/|x^{\star} - \mu_j|}.$$
Subsequently, by applying the following iteration, the optimal solution of (19) can be reached for almost all initial conditions (see [48] and references therein):
$$x^{i+1} = \frac{\sum_{k=1}^{Q} \mu_k / |x^{i} - \mu_k|}{\sum_{k=1}^{Q} 1 / |x^{i} - \mu_k|}. \tag{22}$$
Here, the superscript $i$ denotes the iteration index. Consequently, the algorithm for minimizing the MSL, referred to as MeSiCA, is given in Table 1.
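Weiszfeld's iteration (22) is straightforward to implement. The sketch below is illustrative (the guard against an iterate landing exactly on a data point is our addition, not part of the algorithm as stated above); it finds the geometric median of a few complex points:

```python
import numpy as np

def weiszfeld(mu, x0, tol=1e-10, max_iter=1000):
    """Weiszfeld iteration for argmin_x sum_k |x - mu_k| over complex x."""
    x = complex(x0)
    for _ in range(max_iter):
        d = np.abs(x - mu)
        if np.any(d < 1e-15):          # guard: iterate hit a data point
            return x
        w = 1.0 / d                    # weights 1/|x - mu_k|
        x_new = np.sum(w * mu) / np.sum(w)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

mu = np.array([0 + 0j, 1 + 0j, 0 + 1j, 1 + 1j])
m = weiszfeld(mu, x0=0.2 + 0.3j)       # by symmetry, the median is 0.5 + 0.5j
```

Each step is exactly the weighted average in (22), so every iterate is a convex combination of the $\mu_k$.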
Remark 2: If $|\cdot|$ is replaced by the $l_1$-norm on complex numbers in (19), the result is a completely different problem, known as the sum-of-absolute-values problem, which can be solved by linear programming. The resulting algorithm can be applied to generate sequences with properly shaped autocorrelation as well. Accordingly, the Appendix is devoted to solving the $l_1$-norm version of (19). Although this method is not developed here into a final algorithm, the same approach can be adopted and developed for the MIMO configuration, which is a good topic for future research.

III. DISCUSSIONS AND IMPROVEMENTS
A. IMPOSING THE UNIMODULARITY CONSTRAINT
Note that no deficiency is thereby imposed on the suitability of the sequences. On the contrary, the unimodularity condition is equivalent to the minimum possible peak-to-average power ratio (PAPR), which is a suitability index for the designed sequences (see [13] and references therein). However, this restriction turns out to translate into lower computing speed. Our preference is, therefore, to treat unimodularity separately. To design unimodular sequences, the unimodularity condition must be imposed on (10), (15), and (19) in the above-mentioned argument. In particular, the generic problem (19) becomes
$$\min_{x} \; \sum_{k=1}^{Q} |x - \mu_k| \quad \text{s.t.} \quad |x| = 1. \tag{26}$$
To design unimodular sequences, the only required alteration is that the constrained minimization in (26) is solved instead of (19). Accordingly, (26) can be restated as
$$\min_{x_1, x_2} \; \sum_{k=1}^{Q} |(x_1 + j x_2) - \mu_k| \quad \text{s.t.} \quad x_1^2 + x_2^2 = 1,$$
where $x_1$ and $x_2$ are the real and imaginary parts of the generic $x$. Observe that the feasible set $\{(x_1, x_2) \,|\, x_1^2 + x_2^2 = 1\}$ is not convex. Therefore, the Karush-Kuhn-Tucker (KKT) conditions cannot guarantee the optimality of a solution to this problem. The preference here is rather to apply a nearly optimal unimodular solution. To this end, it is adequate to find the solution of the unconstrained problem (19) and then project it onto the unit circle. Consequently, the algorithm for designing a unimodular sequence with minimized MSL can be derived.
Step 4 of MeSiCA in Table 1 becomes:
4a) For the sequence $\{\mu_k\}$ and for some initial value (e.g., $x_n^0 = 0$), compute the iteration in (22) until some stopping criterion is satisfied (e.g., $|x_n^{i+1} - x_n^i| < \delta$) and the solution to (20), $y^{\star}$, is achieved.
4b) Normalize the solution through $x^{\star} = y^{\star} / |y^{\star}|$.
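Step 4b is a simple projection onto the unit circle; a minimal sketch (the convention of mapping zero to $1$ is our assumption, since any unit-modulus value is then equally close):

```python
import numpy as np

def project_unimodular(y):
    """Project a complex value onto the unit circle: keep the phase, drop the
    magnitude. Zero is mapped to 1 by convention (all unit values are equidistant)."""
    y = complex(y)
    return y / abs(y) if abs(y) > 0 else 1.0 + 0.0j

z = project_unimodular(3 - 4j)    # -> (3 - 4j) / 5 = 0.6 - 0.8j
```
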

B. RESTRICTING TO A FINITE-ALPHABET CONSTELLATION
With the advancement of digital hardware, it is entirely feasible to consider sequences with arbitrary phases. However, there are applications where restricting the designed sequences to a certain finite-alphabet constellation results in substantial implementation simplification. In this regard, MeSiCA can be generalized to the fixed-constellation scenario, where the sequence elements are restricted to a fixed set of points. Bi-phase and polyphase signals are examples of such a generalization. In this case, the design problem includes an additional constraint restricting the space from which the sequence elements are selected: $x_n \in S$, $n = 1, \ldots, N$.
Here, $S$ is the constellation. For instance, we have $S = \{1, -1\}$ and $S = \{1, -j, -1, j\}$ for the bi-phase and quad-phase constellations, respectively. In order to cope with the finite-alphabet problem, the converging-set notion [49] is applied. Consider a function $f(x, k): \mathbb{C} \times (\mathbb{N} \cup \{0\}) \to \mathbb{C}$; the set of complex numbers $\mathbb{C}$ converges under the function $f$ to the set $S$ if $\lim_{k \to \infty} f(x, k) \in S$. This concept allows an iterative algorithm for designing continuous-phase sequences to be converted into an algorithm with finite-alphabet outputs. Here, we restrict the argument to the binary case; for more details on the theory of converging sets and other constellations, see [49]. A suitable $f$ can be constructed for the binary case, where $v$ is a parameter determining the speed of convergence. In this light, the finite-alphabet MeSiCA (FAMeSiCA) differs from the original MeSiCA in step 4, given below. We name the special case of $S = \{-1, 1\}$ BiMeSiCA, for binary MeSiCA.
4a) For the sequence $\{\mu_k\}$ and for some initial value (e.g., $x_n^0 = 0$), compute the iteration in (22) until some stopping criterion is satisfied (e.g., $|x_n^{i+1} - x_n^i| < \delta$) and the solution to (20), $y^{\star}$, is achieved.
4b) Compute $x$ according to $x = f(y^{\star}, k)$.
4c) Set $k = k + 1$.
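The exact converging map of [49] is not reproduced in this excerpt. Purely as an illustration of the converging-set idea, the hypothetical $\tanh$-based map below drives an iterate toward $S = \{-1, +1\}$ as $k$ grows, with $v$ controlling the convergence speed:

```python
import numpy as np

def f_binary(x, k, v=1.0):
    """Hypothetical converging map: f(x, k) -> sign(Re{x}) in {-1, +1} as k -> inf.
    This is only an illustration of the converging-set idea; the construction
    in [49] may differ."""
    return np.tanh(v * (k + 1) * np.real(x))

# The iterate is squeezed toward the binary alphabet as k increases
vals = [f_binary(0.3 + 0.2j, k) for k in (0, 5, 50)]
```

Early iterations leave the value nearly unconstrained, while late iterations pin it to the alphabet, mirroring steps 4b and 4c above.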

C. RANDOMIZATION
The computational burden of MeSiCA comes from two sources: the SVD (i.e., step 2 of each algorithm) and the iteration in step 4. For the latter, it is sufficient to choose a loose stopping criterion. For the former, a method should be adopted to improve the computation speed of the SVD. There is active research on the fast computation of the SVD [50], where randomization is applied. Consider the SVD of $\tilde{X}^H$ used in (18), choose $S < Q$, and let $\Omega$ denote a Gaussian random matrix of size $(N + k_Q - 1) \times S$. We have
$$Y = \tilde{X}^H \Omega,$$
where $Y$ is a $Q \times S$ matrix. By applying the QR factorization, we have $Y = \hat{Q} R$, such that $\hat{Q}$ is a $Q \times S$ orthogonal matrix. Consequently, define $E$ as
$$E = \hat{Q}^H \tilde{X}^H.$$
Note that $E$ is a matrix of much lower dimensions, $S \times (N + k_Q - 1)$, compared to the original $\tilde{X}^H$, indicating that its SVD consumes much less time. Accordingly, let
$$E = \hat{U} \Sigma V^H \tag{33}$$
be the SVD of the matrix $E$, and let $U_1 = \hat{Q} \hat{U}$. In that case, it is known that $\tilde{X}^H$ can be approximated arbitrarily closely by
$$\tilde{X}^H \approx U_1 \Sigma V^H$$
if the value of $S$ is large enough [50], [51]. Based on this development, a faster version of MeSiCA is obtained. This algorithm is called RaMeSiCA, where 'Ra' stands for 'Randomized'. RaMeSiCA is presented in Table 3.
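The randomization steps above (sketch, QR, small SVD, lift) can be condensed into a few lines of NumPy. The example below is a generic randomized-SVD sketch in the spirit of [50], not the exact RaMeSiCA step; the rank-3 test matrix is our own construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, S):
    """Sketch-based SVD: compress A (Q x M) to a small S x M matrix E whose SVD
    is cheap, then lift the left singular vectors back with the basis Qhat."""
    M = A.shape[1]
    Omega = rng.standard_normal((M, S))       # Gaussian test matrix (M x S)
    Y = A @ Omega                             # Q x S sketch of the range of A
    Qhat, _ = np.linalg.qr(Y)                 # orthonormal basis, Q x S
    E = Qhat.conj().T @ A                     # small S x M matrix
    U1, s, Vh = np.linalg.svd(E, full_matrices=False)
    return Qhat @ U1, s, Vh                   # A ~= (Qhat @ U1) @ diag(s) @ Vh

# Rank-3 test matrix: with S = 5 >= rank, the approximation is essentially exact
B = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 200))
U, s, Vh = randomized_svd(B, 5)
err = np.linalg.norm(B - (U * s) @ Vh) / np.linalg.norm(B)
```

The cost is dominated by operations on the $S \times M$ matrix rather than the full $Q \times M$ one, which is the source of RaMeSiCA's speedup when $S < Q$.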

D. CONVERGENCE AND COMPLEXITY ANALYSIS
Convergence analysis of the algorithms developed here falls under the convergence properties of general CA methods. Specifically, consider the real-valued target function (15). The two-step CA method divides the optimization problem into alternations between two separate minimization problems. This approach is reported in other applications under the name alternating minimization [52], whose convergence has been studied in more depth [53]. The essence of the convergence argument is that solving the resulting sub-problem decreases the target function in each step; moreover, since the target function is lower-bounded by zero, it must converge. Due to the similarity of the argument, we avoid repeating it here for brevity. Empirical simulations also confirm the convergence of the CA (alternating minimization) in both waveform design [45] and dictionary learning [54]. The computational complexity of the algorithms presented here is a function of the desired number of suppressed autocorrelation lags. In MeSiCA, the complexity of the outer loop is dominated by computing the SVD in (17). Considering the structure of the matrix $\tilde{X}^H$, the usual SVD requires $O((N + k_Q - 1)Q^2)$ operations. Applying the low-rank randomized SVD reduces this amount to $O((N + k_Q - 1)Q \log(k))$, where $k$ denotes the target rank of the randomized SVD. Besides, the randomized SVD applied here requires no random access to the matrix, indicating that it performs even better for very large matrices [51], [55]. Beside the SVD, the inner loop given in (22) is a major source of time consumption in both MeSiCA and RaMeSiCA. In each iteration of the inner loop, two summations of length $Q$ are computed for all $N$ members of the sequence $\{x_n\}$. Therefore, the inner loop's final cost is $O(2NQK)$, where $K$ is the minimum number of iterations needed for convergence of the inner loop. Note that the inner loop could also be implemented by defining its stopping criterion as $i \le K$.
In brief, Table 2 depicts the complexity comparison among MeSiCA, MWISL [6], and WeCAN [5]. In fact, MWISL has a vector-matrix multiplication in step 9 which dominates its complexity with $O(N^2)$; the $O(N \log N)$ and $O(N)$ terms come from the FFT and vector-scalar operations in the other steps. Furthermore, the per-iteration complexity is not improved by applying SQUAREM. The dominant part of the complexity of WeCAN comes from the matrix-vector multiplication in its step 2. Table 2 reveals that MeSiCA can provide substantial savings when $K, Q \ll N$. In practice, sufficient convergence can be obtained with fewer than 10 inner-loop iterations ($K < 10$).

IV. SIMULATION RESULTS
In this section, the performance of the proposed algorithms is evaluated. The Barker [1] sequence is chosen as a benchmark as a result of its prevalence. In order to have a fair evaluation, comparisons with the recent majorization-minimization (MM) method [6] are performed. All the simulations in this section are run on a Core i5 machine with 16 GB of random access memory (RAM) and 8 MB of cache. In the following simulations, all algorithms start from the Golomb sequence unless explicitly specified. The following formula defines the Golomb sequence:
$$x_n = e^{j\pi (n-1)n/N}, \qquad n = 1, \ldots, N.$$
This sequence is, by definition, unimodular and belongs to the family of polyphase sequences. One of the advantages of the Golomb sequence is that it can be produced for any specified length $N$. On the contrary, not all members of the polyphase family have this feature; for instance, the Frank sequence, another member of this family, can only be produced for square lengths. In order to visualize the results of the algorithms, some parameters should be defined here. Accordingly, we define the normalized autocorrelation as
$$\text{Normalized Autocorrelation} = 20 \log_{10} \frac{|r_k|}{r_0}.$$
This definition is essentially the same as the correlation level given in [5]; however, the name normalized autocorrelation is preferred in order to avoid confusion. Furthermore, we define the mean autocorrelation sidelobe level (MASL) as
$$\mathrm{MASL} = 20 \log_{10} \Bigg( \frac{1}{N-1} \sum_{k=1}^{N-1} \frac{|r_k|}{r_0} \Bigg).$$
Note that this criterion cannot measure the algorithm performance whenever suppressing only a part of the autocorrelation is contemplated. In such a situation, the modified mean autocorrelation sidelobe level (MMASL), defined according to
$$\mathrm{MMASL} = 20 \log_{10} \Bigg( \frac{1}{Q} \sum_{i=1}^{Q} \frac{|r_{k_i}|}{r_0} \Bigg),$$
can be applied. We also compare the algorithms based on the PSL. In this regard, we define the modified peak sidelobe level (MPSL) by
$$\mathrm{MPSL} = 20 \log_{10} \Bigg( \max_{1 \le i \le Q} \frac{|r_{k_i}|}{r_0} \Bigg).$$
This metric measures the PSL over the lags of interest, $r_{k_1}, \ldots, r_{k_Q}$. All the measures mentioned above are appropriate for describing the performance.
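Assuming the standard Golomb polyphase definition $x_n = e^{j\pi(n-1)n/N}$ (the displayed formula above is our reconstruction of the damaged original), the initialization and the MASL metric can be sketched as follows:

```python
import numpy as np

def golomb(N):
    """Golomb polyphase sequence x_n = exp(j*pi*(n-1)*n/N); unimodular, any length N."""
    n = np.arange(1, N + 1)
    return np.exp(1j * np.pi * (n - 1) * n / N)

def autocorr(x):
    N = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:N - k])) for k in range(N)])

def masl_db(x):
    """MASL = 20*log10( mean_{k>0} |r_k| / r_0 )."""
    r = autocorr(x)
    return 20 * np.log10(np.mean(np.abs(r[1:])) / np.abs(r[0]))

x0 = golomb(64)    # a typical initialization for the algorithms in this section
```

Since the sequence is unimodular, its autocorrelation peak is $r_0 = N$, and the MASL is strictly negative.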

A. SIDELOBE SUPPRESSION ABILITY
In this section, the performance of newly introduced algorithms is compared with several benchmarking algorithms. The comparison with methods like Barker or Golomb is performed because they have been applied to many applications. The MeSiCA and RaMeSiCA are also compared with the monotonic minimizer for weighted ISL (MWISL) [6]. This algorithm exploits the MM to minimize a weighted ISL metric.

1) COMPARISON WITH BARKER (SUPPRESSING MORE THAN HALF OF THE SIDELOBES)
Barker is the most prevalent binary code, desired for its autocorrelation properties. The maximum length of this bi-phase code is 13, where the corresponding code is
$$\{+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1\}.$$
It has the lowest possible autocorrelation sidelobe level in the set of bi-phase codes of the same length. Figure 2 illustrates the comparison of the normalized autocorrelation among MeSiCA, RaMeSiCA, MWISL, WeCAN, and Barker 13. Here, the length is $N = 13$ while $k_Q = Q = N - 2 = 11$ for all benchmark algorithms. The termination point for WeCAN is $\varepsilon = 10^{-8}$, while MWISL is run for 50000 iterations. Both of these values ensure the convergence of the algorithms. However, these two benchmark algorithms can only suppress the sidelobes down to almost zero if the lags of interest cover less than half of the sidelobes, as acknowledged by their authors [5], [6]. On the other hand, both MeSiCA and RaMeSiCA can suppress beyond half of the sidelobes, except for the all-sidelobes scenario. To illustrate this, we first set $\varepsilon = 3 \times 10^{-11}$ for both RaMeSiCA and MeSiCA. Interestingly, both algorithms generate the same autocorrelation, because both solve the same problem. However, RaMeSiCA reaches the termination point in slightly fewer iterations: 54410, compared to 55919 for MeSiCA. It is worth mentioning that we did not expect a significant speedup at this stage, since the length is low, while RaMeSiCA is specifically designed to speed up long-sequence design. We also included the MeSiCA and RaMeSiCA waveforms for $\varepsilon = 3 \times 10^{-4}$ and $\varepsilon = 3 \times 10^{-5}$, respectively, to showcase the suppression for various values of $\varepsilon$. It is observed that by setting a lower value of $\varepsilon$, stronger suppression can be guaranteed at the expense of a longer running time. The ultimate limit of $\varepsilon$, which is on the order of $10^{-15}$, is set by MATLAB's handling of small numbers.
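The Barker-13 properties quoted above are easy to verify numerically; the sketch below confirms that the autocorrelation peak is 13 and every sidelobe has magnitude at most one:

```python
import numpy as np

# Barker-13: the longest known Barker code; every sidelobe magnitude is at most 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

r = np.array([np.sum(barker13[k:] * barker13[:13 - k]) for k in range(13)])
peak = r[0]                      # mainlobe: 13
psl = np.max(np.abs(r[1:]))      # peak sidelobe: 1 (about -22.3 dB normalized)
```
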
Meanwhile, it is worth noting that most of the akin algorithms developed in the literature, including those in [5], [6], are not able to suppress more than half of the sidelobes. The ability of MeSiCA rests on the fact that this algorithm is produced by relaxing the unimodularity constraint; compared with unimodular sequence design algorithms, this constraint confines the degrees of freedom of the target sequence to half, as noted in [5] and [6]. Since the difference between MeSiCA and RaMeSiCA lies in the SVD part, the same behavior is expected from RaMeSiCA. Here, we further showcase this ability of MeSiCA compared to MWISL. Consider a case where $N = 20$, while suppressing the first $L$ sidelobes is desired. We consider the values 5, 9, 11, and 15 for $L$ so as to include values less than, about, and more than the margin of half of the sidelobes, i.e., 10. Figure 3 illustrates the MWISL and MeSiCA performance for this example. Here, all MWISL scenarios are run for 50000 iterations. For MeSiCA, $\varepsilon$ is set to $\varepsilon = 10^{-12}$ except for the last case, $L = 15$, where it is set to $\varepsilon = 10^{-11}$. It is evident that MWISL cannot suppress the sidelobes down to almost zero for $L = 11$ and $L = 15$, while MeSiCA can do so for all scenarios.
FIGURE 2. Comparison of the autocorrelation sidelobes among Barker 13 [1], MWISL [6], WeCAN [5], MeSiCA, and RaMeSiCA when N = 13 and suppressing the first 11 sidelobes is of interest.
FIGURE 3. Comparison between MWISL and MeSiCA when suppressing the first [5, 9, 11, 15] sidelobes when N = 20.

2) SUPPRESSING NON-CONTIGUOUS COEFFICIENTS
MeSiCA can suppress non-contiguous coefficients as well. However, in order to suppress a specified part of the autocorrelation, either $\{k_i\}$ should be calculated such that the cumulative occurrence of ones rests more on the desired coefficient lags in $\tilde{X}^H \tilde{X}$, or proper values for $\{k_i\}$ should be found through trial and error. In this regard, consider a situation where suppression of the first five autocorrelation lags and of the lags from $k = 40$ to $k = 50$ is desired. It is sufficient to choose $k_Q = 50$ and $\{k_i\} = \{1, \ldots, 5\} \cup \{45, \ldots, 50\}$. Each coefficient in $r_1, \ldots, r_5$ then gets a cumulative occurrence of two, while each coefficient in $r_{40}, \ldots, r_{50}$ gets a single occurrence. The result of this configuration is demonstrated in Fig. 4 in blue, where the suppression level of $r_1, \ldots, r_5$ is higher than that of $r_{40}, \ldots, r_{50}$, as expected. To reach this depiction, $\varepsilon$ is set to $0.3 \times 10^{-10}$. This example also provides an opportunity to demonstrate the applicability of the finite-alphabet scenario. In this regard, MeSiCA and BiMeSiCA are compared for a fixed number of iterations, $K = 2000$, with the same configuration as above. It can be observed that BiMeSiCA is also able to suppress the desired coefficients. However, it cannot compete with MeSiCA in terms of suppression level, because it is restricted to the binary constellation.

3) SUPPRESSION ABILITY OF THE ALGORITHMS
Here, an example is contemplated for various iteration numbers to evaluate the performance of the proposed algorithms in minimizing the MMASL. Consider a problem where suppression of the first 24 lags of a sequence of length 100 is contemplated; that is, $N = 100$ and $\{k_i\}_{i=1}^{24} = \{1, \ldots, 24\}$. For such a scenario, the MMASL metric is computed for MeSiCA and RaMeSiCA. The result is compared with that of the MWISL algorithm [6]. Fig. 5 shows that an improvement of approximately 10 to 20 dB is achieved. The parameter $\delta$ is set to 0.01 for both of the proposed algorithms.
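The MMASL of Section IV can be evaluated for any candidate sequence. In the sketch below, an arbitrary random-phase sequence stands in for an algorithm output (our placeholder, not a designed waveform), and the metric is computed over the first 24 lags as in the scenario above:

```python
import numpy as np

def autocorr(x):
    n = len(x)
    return np.array([np.sum(x[k:] * np.conj(x[:n - k])) for k in range(n)])

def mmasl_db(x, lags):
    """MMASL: mean normalized sidelobe magnitude over the lags of interest, in dB."""
    r = autocorr(x)
    vals = np.abs(r[np.asarray(list(lags))]) / np.abs(r[0])
    return 20 * np.log10(np.mean(vals))

rng = np.random.default_rng(1)
x = np.exp(2j * np.pi * rng.random(100))    # random-phase stand-in, N = 100
level = mmasl_db(x, range(1, 25))           # first 24 lags, as in the text
```

Since every normalized sidelobe satisfies $|r_k|/r_0 < 1$, the metric is always negative; a designed waveform would push it far below the random-phase baseline.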

4) TIME PERFORMANCE OF THE ALGORITHMS
Although depicting the MMASL versus the number of iterations represents some aspects of the convergence of the developed algorithms, it cannot completely characterize the algorithms' speed. The speed of some steps in each iteration can be improved with special care; this is the motivation behind developing RaMeSiCA. In order to compare the speed of the algorithms, suppression of $r_1, \ldots, r_{120}$ is contemplated for a sequence of length $N = 256$, that is, $k_Q = Q = 120$. The suppression level of this scenario, measured through the MMASL, is depicted in Fig. 6, where the x-axis represents the time in seconds. The comparison among MeSiCA, RaMeSiCA, and the benchmark algorithm, i.e., MWISL [6], reveals that an advantage of several orders of magnitude is attainable through the newly developed algorithms. The advantage of RaMeSiCA comes from two facts: the fast SVD and a deliberate choice of the parameter $\delta$. Generally, trial and error shows that choosing a loose value for $\delta$ results in better performance for RaMeSiCA. For instance, in the scenario depicted in Fig. 6, this parameter is chosen to be $\delta = 0.1$ for RaMeSiCA versus $\delta = 0.0001$ for MeSiCA. Here, the $\delta$ for both algorithms is tuned by trial and error to obtain the best performance of each algorithm. Finally, the sketch dimension of the Gaussian random matrix used in the randomized SVD is set to 4 for this simulation.
MeSiCA and RaMeSiCA can also beat the benchmark algorithm in other metrics like the PSL, although they are specifically designed for minimizing the MSL and should only be expected to outperform the benchmarks in the MSL. To illustrate this, we compare the MPSL performance among MeSiCA, RaMeSiCA, and MWISL for the same scenario as above. Figure 7 depicts this comparison, where it is observed that both MeSiCA and RaMeSiCA significantly outperform MWISL in the MPSL metric as well.

V. CONCLUSION
This article deals with designing sequences with a minimized mean sidelobe level (MSL). The MSL minimization is formulated and solved through the cyclic algorithm, where Weiszfeld's algorithm is applied to solve the resulting generic optimization problem. Additionally, imposing the practically important unimodularity constraint and restricting the design to finite-alphabet constellations are discussed. The computational burden of the developed algorithms is shown to rest on the embedded SVD. The randomization method is proposed to speed up the SVD and thereby improve the developed algorithms, particularly for generating long sequences. Simulation results affirm the superiority of the proposed methods both in suppression level and in consumed time. The superiority in speed and the ability to suppress a desired part of the autocorrelation make the presented methods suitable for cognitive radars.