On the variability of the sample covariance matrix under complex elliptical distributions

We derive the form of the variance-covariance matrix of any affine equivariant matrix-valued statistic when sampling from complex elliptical distributions. We then use this result to derive the variance-covariance matrix of the sample covariance matrix (SCM) as well as its theoretical mean squared error (MSE) when finite fourth-order moments exist. Finally, illustrative examples of the formulas are presented.


I. INTRODUCTION
Suppose we observe independent and identically distributed (i.i.d.) complex-valued p-variate random vectors x_1, …, x_n ∈ C^p with mean µ = E[x_i] and positive definite covariance matrix Σ = E[(x_i − µ)(x_i − µ)^H]. The (unbiased) estimators of Σ and µ are the sample covariance matrix (SCM) and the sample mean, defined by

S = (n − 1)^{-1} Σ_{i=1}^n (x_i − x̄)(x_i − x̄)^H  and  x̄ = n^{-1} Σ_{i=1}^n x_i.

The SCM is an integral part of many statistical signal processing methods such as adaptive filtering (Wiener and Kalman filters), spectral estimation, and array processing (MUSIC algorithm, Capon beamformer) [1], [2], as well as adaptive radar detectors [3], [4], [5]. In signal processing applications, a typical assumption is that the data x_1, …, x_n follow a (circular) complex multivariate normal (MVN) distribution [6], denoted by CN(µ, Σ). A more general assumption is a Complex Elliptically Symmetric (CES) distribution [7], [8], a family that includes the MVN distribution as well as heavier-tailed distributions, such as the t-, K-, and inverse Gaussian distributions that are commonly used in radar and array signal processing applications [9], [10], [8], [11].
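For illustration, the sample mean and the unbiased SCM above can be computed directly (a minimal numpy sketch; the simulated circular complex Gaussian data and all variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 4

# Simulated circular complex Gaussian data with identity covariance:
# each entry has variance 1/2 in the real part and 1/2 in the imaginary part.
X = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)

# Sample mean and unbiased sample covariance matrix (SCM).
xbar = X.mean(axis=0)
Xc = X - xbar
S = Xc.T @ Xc.conj() / (n - 1)   # (n-1)^{-1} sum_i (x_i - xbar)(x_i - xbar)^H
```

Here `Xc.T @ Xc.conj()` accumulates the sum of outer products (x_i − x̄)(x_i − x̄)^H because the rows of `X` hold the observations x_i^T.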
In this paper, we study the complex-valued (unbiased) SCM, for which we derive the variance-covariance matrix as well as the theoretical mean squared error (MSE) when sampling from CES distributions. We also provide a general expression for the variance-covariance matrix of any affine equivariant matrix-valued statistic (of which the SCM is a particular case). The results regarding the SCM extend the results in [12] to the complex-valued case; in [12], the variance-covariance matrix and MSE of the SCM were derived for real-valued elliptical distributions.
The structure of the paper is as follows. Section II introduces CES distributions. In Section III, we derive the variance-covariance matrix of any affine equivariant matrix-valued statistic when sampling from a CES distribution. In Section IV, we derive the variance-covariance matrix of the SCM and provide an application in shrinkage estimation. Section V concludes. All proofs are given in the appendix.
Notation: We let I, 1, and e_i denote the identity matrix, a vector of ones, and a vector whose ith coordinate is one and all other coordinates are zero, respectively. The notations (·)^*, (·)^T, and (·)^H denote the complex conjugate, the transpose, and the conjugate transpose, respectively. The notations H_p, H_p^+, and H_p^{++} denote the sets of Hermitian, Hermitian positive semidefinite, and Hermitian positive definite p × p matrices, respectively. We use the shorthand notation var(A) = var(vec(A)) and pvar(A) = pvar(vec(A)) (see Section III for the definition of pvar), where vec(A) = (a_1^T · · · a_p^T)^T is the vectorization of A = (a_1 · · · a_p) obtained by stacking its columns. When there is a possibility for confusion, we denote by cov_{µ,Σ}(·,·) or E_{µ,Σ}[·] the covariance and expectation for a sample from an elliptical distribution with mean vector µ and covariance matrix Σ. The p² × p² commutation matrix [13] is defined by K_{p,p} = Σ_{i,j} e_i e_j^T ⊗ e_j e_i^T, where ⊗ is the Kronecker product.
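The commutation matrix can be built directly from its definition and checked against its standard action on vec (a small numpy sketch; the property K_{p,p} vec(A) = vec(A^T) is a well-known fact, not stated above):

```python
import numpy as np

p = 3
I = np.eye(p)

# Commutation matrix K_{p,p} = sum_{i,j} (e_i e_j^T) kron (e_j e_i^T).
K = sum(np.kron(np.outer(I[i], I[j]), np.outer(I[j], I[i]))
        for i in range(p) for j in range(p))

# Standard property: K vec(A) = vec(A^T) for the column-stacking vec.
vec = lambda M: M.reshape(-1, order="F")
A = np.arange(p * p).reshape(p, p)
assert np.allclose(K @ vec(A), vec(A.T))
```

Since transposing twice is the identity, K_{p,p} is its own inverse, which the same construction also verifies.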
The notation =_d reads "has the same distribution as", U(CS^p) denotes the uniform distribution on the complex unit sphere CS^p = {u ∈ C^p : ‖u‖ = 1}, and R_{≥0} = {a ∈ R : a ≥ 0}.

II. COMPLEX ELLIPTICALLY SYMMETRIC DISTRIBUTIONS
A random vector x ∈ C^p is said to have a circular CES distribution if and only if it admits the stochastic representation

x =_d µ + r Σ^{1/2} u,    (2)

where µ = E[x] is the mean vector, Σ^{1/2} ∈ H_p^{++} is the unique Hermitian positive definite square root of Σ, u ∼ U(CS^p), and r > 0 is a positive random variable called the modular variate. Furthermore, r and u are independent. If the cumulative distribution function of x is absolutely continuous, the probability density function exists and is, up to a constant, of the form

f(x) ∝ |Σ|^{-1} g((x − µ)^H Σ^{-1} (x − µ)),    (3)

where g : R_{≥0} → R_{>0} is the density generator. We denote this case by x ∼ CE_p(µ, Σ, g). We assume that x has finite fourth-order moments, and thus we can assume without any loss of generality that Σ is equal to the covariance matrix var(x) [8] (implying E[r²] = p). We refer the reader to [8] for a comprehensive account of CES distributions. The elliptical kurtosis of a CES distribution is defined as

κ = E[r⁴]/(p(p + 1)) − 1.    (4)

The elliptical kurtosis shares properties similar to the kurtosis of a circular complex random variable. Specifically, if x ∼ CN_p(µ, Σ), then κ = 0. This follows by noticing that 2r² ∼ χ²_{2p}, and hence E[r⁴] = p(p + 1), and consequently κ = 0 in the Gaussian case. The kurtosis of a complex circularly symmetric random variable x ∈ C is defined as kurt(x) = E[|x|⁴]/(E[|x|²])² − 2. Similar to the real-valued case, κ has a simple relationship with the (excess) kurtosis [14, Lemma 3]: κ = (1/2) kurt(x_j) for any j ∈ {1, …, p}. We note that the lower bound for the elliptical kurtosis is κ ≥ −1/(p + 1) [8].
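The stochastic representation gives a direct way to simulate CES data and to check the quoted moment identities. Below is a Monte Carlo sketch for the Gaussian case (2r² ∼ χ²_{2p}); the expression κ = E[r⁴]/(p(p + 1)) − 1 used here is our reading of the elliptical kurtosis definition, consistent with E[r⁴] = p(p + 1) and κ = 0 for the Gaussian:

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 4, 200_000

# Stochastic representation x = mu + r * Sigma^{1/2} u, simulated here with
# mu = 0, Sigma = I in the Gaussian case: 2 r^2 ~ chi^2_{2p}.
r = np.sqrt(rng.chisquare(2 * p, size=N) / 2)
g = rng.standard_normal((N, p)) + 1j * rng.standard_normal((N, p))
u = g / np.linalg.norm(g, axis=1, keepdims=True)   # u ~ U(CS^p)
X = r[:, None] * u                                  # CES sample with Sigma = I

# Moment checks: E[r^2] = p, elliptical kurtosis kappa = 0, E[x x^H] = I.
kappa = np.mean(r**4) / (p * (p + 1)) - 1
print(np.mean(r**2), kappa)   # ≈ p and ≈ 0
print(np.linalg.norm(X.T @ X.conj() / N - np.eye(p)))   # small
```

Replacing the chi-square modular variate by a heavier-tailed one (e.g., for a complex t-distribution) yields κ > 0 with the same code.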
Lastly, we define the scale η = tr(Σ)/p and the sphericity

γ = p tr(Σ²)/tr(Σ)².    (6)

The scale is equal to the mean of the eigenvalues of Σ. The sphericity measures how close the covariance matrix is to a scaled identity matrix: the sphericity parameter takes the value 1 for a scaled identity matrix and p for a rank-one matrix.
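Both parameters are straightforward to compute. In this sketch, the formulas η = tr(Σ)/p and γ = p tr(Σ²)/tr(Σ)² are our reading of the definitions, consistent with the stated extreme values 1 and p:

```python
import numpy as np

def scale_sphericity(Sigma):
    """Scale eta = tr(Sigma)/p and sphericity gamma = p tr(Sigma^2)/tr(Sigma)^2."""
    p = Sigma.shape[0]
    eta = float(np.trace(Sigma).real) / p
    gamma = p * float(np.trace(Sigma @ Sigma).real) / float(np.trace(Sigma).real) ** 2
    return eta, gamma

print(scale_sphericity(3.0 * np.eye(4)))   # (3.0, 1.0): scaled identity, gamma = 1
v = np.ones((4, 1))
print(scale_sphericity(v @ v.T))           # (1.0, 4.0): rank one, gamma = p
```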

III. VARIANCE-COVARIANCE OF AFFINE EQUIVARIANT ESTIMATES
In this section, we derive the variance-covariance matrix of any affine equivariant matrix-valued statistic. We begin with some definitions.
The covariance and pseudo-covariance [15] of complex random vectors x_1 and x_2 are defined as

cov(x_1, x_2) = E[(x_1 − E[x_1])(x_2 − E[x_2])^H]  and  pcov(x_1, x_2) = E[(x_1 − E[x_1])(x_2 − E[x_2])^T],

and together they provide a complete second-order description of the associations between x_1 and x_2. Then var(x) = cov(x, x) and pvar(x) = pcov(x, x) are called the covariance matrix and the pseudo-covariance matrix [15], respectively.

The following result extends the result of [16] to the complex-valued case.

Theorem 1. Let a random matrix A = (a_ij) ∈ H_p have a radial distribution (i.e., A =_d UAU^H for all unitary U) with finite second-order moments. Then, there exist real-valued constants σ, τ_1, and τ_2 with τ_1 ≥ 0 and τ_2 ≥ −τ_1/p such that E[A] = σI with σ = E[a_ii], and

var(A) = τ_1 I + τ_2 vec(I)vec(I)^T,  pvar(A) = τ_1 K_{p,p} + τ_2 vec(I)vec(I)^T,

where τ_1 = var(a_12) and τ_2 = cov(a_11, a_22).

A statistic Σ̂ = Σ̂(X) ∈ H_p based on an n × p data matrix X = (x_1 · · · x_n)^T of n ≥ 1 observations on p complex-valued variables is said to be affine equivariant if

Σ̂(XA + 1a^T) = A^T Σ̂(X) A^*    (9)

holds for all A ∈ C^{p×p} and a ∈ C^p. Suppose that x_1, …, x_n ∈ C^p is a random sample from a CES distribution CE_p(µ, Σ, g) and that Σ̂ = (σ̂_ij) ∈ H_p is an affine equivariant statistic. Then Σ̂ has the stochastic decomposition

Σ̂(X) =_d Σ^{1/2} Σ̂(Z) Σ^{1/2},    (10)

where Σ̂(Z) denotes the value of Σ̂ based on a random sample z_1, …, z_n ∈ C^p from the spherical distribution CE_p(0, I, g).
This follows by writing X =_d Z(Σ^{1/2})^T + 1µ^T using (2) (where z_i = r_i u_i) and then applying (9). Affine equivariance, together with the fact that z_i =_d Qz_i for all unitary matrices Q, implies that Σ̂(Z) has a radial distribution. This leads to Theorem 2 stated below.

Theorem 2. Let Σ̂ be an affine equivariant statistic with finite second-order moments computed on a random sample from CE_p(µ, Σ, g). Then

E[Σ̂] = σΣ,  var(Σ̂) = τ_1 Σ^* ⊗ Σ + τ_2 vec(Σ)vec(Σ)^H,  pvar(Σ̂) = τ_1 (Σ^* ⊗ Σ)K_{p,p} + τ_2 vec(Σ)vec(Σ)^T,

where σ, τ_1, and τ_2 are the constants of Theorem 1 associated with Σ̂(Z).
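The structure asserted by Theorem 1 can be verified numerically for the SCM of spherical data. For complex Gaussian samples with Σ = I, one expects var(vec(S)) = τ_1 I + τ_2 vec(I)vec(I)^T with τ_1 = 1/(n − 1) and τ_2 = 0; these specific constants are our computation, used here only as a Monte Carlo sanity check:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n, trials = 2, 5, 50_000

# Monte Carlo estimate of var(vec(S)) for the SCM of spherical Gaussian data.
vecs = np.empty((trials, p * p), dtype=complex)
for t in range(trials):
    Z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    Zc = Z - Z.mean(axis=0)
    S = Zc.T @ Zc.conj() / (n - 1)
    vecs[t] = S.reshape(-1, order="F")        # column-stacking vec(S)

V = np.cov(vecs, rowvar=False)                # estimates var(vec(S))
print(np.round(V.real, 2))                    # ≈ (1/(n-1)) I_4 = 0.25 I_4
```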
There are many statistics to which this theorem applies. Naturally, a prominent example is the SCM, which we examine in detail in the next section. Other examples are the complex M-estimators of scatter discussed in [8] and the weighted sample covariance matrices R = n^{-1} Σ_{i=1}^n u((x_i − x̄)^H S^{-1}(x_i − x̄)) (x_i − x̄)(x_i − x̄)^H with a weight function u : R_{≥0} → R_{≥0}. In the special case u(s) = s, we obtain the fourth-moment matrix used in FOBI [17] for blind source separation and in Invariant Coordinate Selection (ICS) [18].
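A sketch of such a weighted sample covariance matrix in numpy. The exact definition assumed here (centering, and weights applied to the squared Mahalanobis distances (x_i − x̄)^H S^{-1}(x_i − x̄), as in the FOBI/ICS literature) is our reconstruction, since the display is garbled in the source:

```python
import numpy as np

def weighted_scm(X, u):
    """R = (1/n) sum_i u(d_i) (x_i - xbar)(x_i - xbar)^H, where
    d_i = (x_i - xbar)^H S^{-1} (x_i - xbar) and S is the SCM (assumed form)."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc.conj() / (n - 1)
    d = np.einsum("ij,jk,ik->i", Xc.conj(), np.linalg.inv(S), Xc).real
    w = u(d)
    return (Xc.T * w) @ Xc.conj() / n

rng = np.random.default_rng(2)
X = (rng.standard_normal((1000, 3)) + 1j * rng.standard_normal((1000, 3))) / np.sqrt(2)
R = weighted_scm(X, lambda s: s)   # u(s) = s: FOBI fourth-moment matrix
assert np.allclose(R, R.conj().T)  # Hermitian, since the weights are real
```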

IV. VARIANCE-COVARIANCE OF THE SCM AND AN EXAMPLE IN SHRINKAGE ESTIMATION
We now use Theorem 2 to derive the covariance matrix and the pseudo-covariance matrix as well as the MSE of the SCM when sampling from a CES distribution. This result extends [12, Theorem 2 and Lemma 1] to the complex case.
Theorem 3. Let the SCM S = (s_ij) be computed from an i.i.d. random sample x_1, …, x_n ∈ C^p from a CES distribution CE_p(µ, Σ, g) with finite fourth-order moments and covariance matrix Σ = var(x_i). Then, the covariance matrix and pseudo-covariance matrix of S are

var(S) = (1/(n − 1) + κ/n) Σ^* ⊗ Σ + (κ/n) vec(Σ)vec(Σ)^H,
pvar(S) = (1/(n − 1) + κ/n) (Σ^* ⊗ Σ)K_{p,p} + (κ/n) vec(Σ)vec(Σ)^T,    (11)

where κ is the elliptical kurtosis in (4). The MSE is given by

MSE(S) = E[‖S − Σ‖²_F] = (1/(n − 1) + κ/n) tr(Σ)² + (κ/n) tr(Σ²),    (12)

and the normalized MSE is

NMSE(S) = MSE(S)/‖Σ‖²_F = (1/(n − 1) + κ/n)(p/γ) + κ/n,    (13)

where γ is the sphericity parameter defined in (6).

The finite-sample performance of the SCM can often be improved by using shrinkage covariance matrix estimators, which are an active area of research; see, e.g., [19], [20], [21], [22], [12], [23], [24]. Consider the simple shrinkage covariance matrix estimation problem β_o = argmin_β E[‖βS − Σ‖²_F]. Since the problem is convex, we can find β_o as the solution of dE[‖βS − Σ‖²_F]/dβ = 0, which gives

β_o = tr(Σ²)/E[‖S‖²_F] = 1/(1 + NMSE(S)),    (14)

where we used (13). It follows that the optimal scaling term β_o is always smaller than 1 since MSE(S) > 0. Furthermore, β_o is a function of γ and κ via (13). In the Gaussian case (κ = 0), we obtain β_o = (n − 1)/(n − 1 + p/γ). Figure 1 illustrates the effect of κ on β_o when γ = 2 and n = p = 10. Next we show that the oracle estimator S_o = β_o S is uniformly more efficient than S, i.e., MSE(S_o) < MSE(S) for any Σ ∈ H_p^{++}. First write

MSE(βS) = E[‖βS − Σ‖²_F] = β² E[‖S‖²_F] − 2β tr(Σ²) + tr(Σ²).    (15)
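The Gaussian-case expressions can be checked by simulation. The relations NMSE(S) = p/(γ(n − 1)) for κ = 0 and β_o = 1/(1 + NMSE(S)) used below follow our reading of the (partly garbled) formulas in the source, so this Monte Carlo sketch should be read as a consistency check rather than the paper's own experiment:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, trials = 4, 20, 4000

# Monte Carlo NMSE of the SCM for complex Gaussian data with Sigma = I
# (so gamma = 1 and kappa = 0).
Sigma = np.eye(p)
err = 0.0
for _ in range(trials):
    X = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc.conj() / (n - 1)
    err += np.linalg.norm(S - Sigma) ** 2     # squared Frobenius error
nmse_mc = err / trials / np.linalg.norm(Sigma) ** 2
nmse_th = p / (n - 1)                         # p/(gamma (n-1)) with gamma = 1
beta_o = 1 / (1 + nmse_th)                    # oracle shrinkage coefficient
print(nmse_mc, nmse_th, beta_o)
```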

Substituting β_o from (14) into (15), we get

MSE(S_o) = β_o² E[‖S‖²_F] − 2β_o tr(Σ²) + tr(Σ²) = (1 − β_o) tr(Σ²) = β_o NMSE(S) tr(Σ²) = β_o MSE(S),

where the last identity follows from the fact that 1/β_o = 1 + NMSE(S) due to (14). Since β_o < 1 for all Σ ∈ H_p^{++}, it follows that S_o is more efficient than S. Efficiency in the case when γ and κ, and hence β_o, need to be estimated remains (to the best of our knowledge) an open problem. However, for certain related shrinkage estimators, the shrinkage intensity can be consistently estimated; see, e.g., [19], [20].
In the univariate case (p = 1), Σ is equal to the variance σ² = var(x) > 0 of the random variable x ∈ C, and the SCM reduces to the sample variance s² = (n − 1)^{-1} Σ_{i=1}^n |x_i − x̄|². In this case, γ = 1, and β_o in (14) becomes

β_o = (1 + 1/(n − 1) + kurt(x)/n)^{-1} = n(n − 1)/(n² + (n − 1) kurt(x)).

A similar result was noted in [25] for the real-valued case. If the data are from a complex normal distribution CN(µ, σ²), then kurt(x) = 0 and β_o = (n − 1)/n, and hence s_o² = β_o s² = n^{-1} Σ_{i=1}^n |x_i − x̄|², which equals the maximum likelihood estimate (MLE) of σ². In the real case, the optimal scaling constant is β_o = (n − 1)/(n + 1) for Gaussian samples [26]. Note that when the kurtosis is large and positive and n is small, β_o can be substantially less than one, and the gain from using s_o² can be significant.
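In the univariate Gaussian case, the oracle shrinkage thus reduces to the MLE of σ², and its MSE improvement over the unbiased sample variance is easy to see by simulation (a sketch; the data model and constants are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials, sigma2 = 10, 20_000, 1.0

beta_o = (n - 1) / n                 # Gaussian case: kurt(x) = 0
mse_s = mse_o = 0.0
for _ in range(trials):
    x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    s2 = np.sum(np.abs(x - x.mean()) ** 2) / (n - 1)   # unbiased sample variance
    mse_s += (s2 - sigma2) ** 2
    mse_o += (beta_o * s2 - sigma2) ** 2               # oracle s_o^2 (the MLE here)
print(mse_s / trials, mse_o / trials)   # the oracle MSE is smaller
```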

V. CONCLUSION
We derived the form of the variance-covariance matrix of any affine equivariant matrix-valued statistic when sampling from CES distributions. We used this result to derive the variance-covariance matrix and the MSE of the SCM when finite fourth-order moments exist. An illustrative example in the context of shrinkage covariance matrix estimation was presented.

C. Proof of Theorem 3
The proof is similar to the proof of [12, Theorem 2], which covers the real-valued case. Write the SCM as S = (s_ij) = (n − 1)^{-1} X^T H X^*, where H = I − n^{-1} 1 1^T is the centering matrix. Write a = Xe_q and b = Xe_r for q ≠ r, and note that s_qr = (n − 1)^{-1} a^T H b^*. Since for x_i ∼ CE_p(0, I, g) we have x_i =_d r_i u_i, we can write a = Xe_q = (r_1 u_{1q}, r_2 u_{2q}, …, r_n u_{nq})^T, and similarly for b. The klth element of the ijth block (i.e., the ijklth element) of the n² × n² matrix var(vec(b^* a^T)) is

cov(b_i^* a_j, b_k^* a_l) = E[r_i r_j r_k r_l] E[u_{ir}^* u_{jq} u_{kr} u_{lq}^*],

which is nonzero only when i = j = k = l or when i = k ≠ j = l; all other moments up to fourth order vanish. This and the fact that E[r_i⁴] = (1 + κ)p(p + 1), due to (4), yield the second-order moments of s_qr. This together with (16) and (18) gives the stated expressions in (11). The expression inside the expectation defining the MSE is equal to (19), and so

MSE(S) = tr(var(S)) = (1/(n − 1) + κ/n) tr(Σ^* ⊗ Σ) + (κ/n) tr(Σ²) = (1/(n − 1) + κ/n) tr(Σ)² + (κ/n) tr(Σ²),

where the last identity follows from tr(Σ^* ⊗ Σ) = tr(Σ)². This gives the stated expression for the MSE.