Linear Version of Parseval’s Theorem

Parseval's theorem states that the energy of a signal is preserved by the discrete Fourier transform (DFT). Parseval's formula shows that there is a nonlinear invariant function for the DFT, so the total energy of a signal can be computed from the signal or from its DFT using the same nonlinear function. In this paper, we address the question of whether there are linear invariant functions for the DFT, how they can be found, and what their potential applications in digital signal processing are. To answer this question, we first prove that the only linear equations preserved by the DFT are its orthogonal projections. Then, using the Hilbert-space theory of adjoint operators, we propose an algorithm that computes all linear invariant functions for the DFT. These linear invariant functions are also shown to be useful and important in a variety of signal-processing applications, particularly for finding boundaries for transformed signals without explicitly evaluating the DFT, and vice versa. Additionally, using the proposed identities, we demonstrate that the average of the circular auto-correlation function for a large class of signals is preserved by the DFT. Finally, the results reported in this paper are verified for several short-length and long-length DFTs, including a 256-point DFT.


I. INTRODUCTION
THE discrete Fourier transform (DFT), as a Fourier representation of finite-length sequences, plays a central role in a wide variety of signal-processing applications, including filtering and spectral analysis, since it can be computed explicitly by efficient algorithms, collectively called the fast Fourier transform (FFT). The principal results of this paper are based on the normalized version of the DFT; they can be easily extended to other versions of the DFT. Since unitary matrices leave the length of a complex vector unchanged, the normalized DFT leads to Parseval's theorem, which verifies energy conservation between the time and frequency domains. In Eq. (1), for notational simplicity, the sequence x[n] and its normalized DFT sequence X[k] are denoted by the vectors x = [x[0], x[1], · · · , x[N − 1]]^T and X = [X[0], X[1], · · · , X[N − 1]]^T, respectively.
In other words, Parseval's formula, as given in (1), states that

∑_{n=0}^{N−1} |x[n]|² = ∑_{k=0}^{N−1} |X[k]|².

As seen from (1), Parseval's theorem can be considered as one answer to the problem of finding nonlinear functions ϕ : C^N −→ C that are invariant under the N-point DFT matrix F, that is, ϕ(F x) = ϕ(x), where ϕ(x) = ‖x‖². However, a more challenging problem is to find linear invariant functions under the DFT. Such invariant functions introduce a linear version of Parseval's theorem. In this paper, it will be shown that invariant functions of a linear transformation T exist if one of the eigenvalues of T†, the adjoint of T, is unity. Based on this, we introduce a novel algorithm to compute invariant functions. For the N-point DFT, we obtain ⌊N/4⌋ + 1 basis linear Parseval identities, where ⌊N/4⌋ denotes the integer part of N/4. Then, we derive some new identities that can identify more spectral properties of real signals. The new identities show that the orthogonal projections of a time signal x and its transform X onto the eigenspace of the eigenvalue λ = 1 are the same. Therefore, this projection-preserving property is the unique linear information that is invariant under the DFT. It is shown that the coefficients in the derived linear Parseval identities are the eigenvectors of the DFT corresponding to the eigenvalue λ = 1; hence, these identities show the importance of this very special eigenspace and broaden its applications. There have been several different approaches to finding eigenvectors of the DFT and related transformations. Constructing a basis for these eigenvectors was originally proposed in the spectral decomposition theory of the DFT, as McClellan and Parks did in [1]. The method of spectral decomposition has been applied to describe the eigenvalues and eigenvectors, along with the question of finding an orthonormal basis, in [2]–[5]. Another method to find a basis for the eigenvectors is based on matrices that commute with the DFT.
The idea was introduced by Dickinson and Steiglitz in [6]. Later, Candan found a general form of commuting matrices in [7]. This theory has since been reviewed and generalized for other related transformations in [8]. Yet another method to construct the eigenvectors uses closed-form expressions [9]–[11]. Recently, in [13], Hsue used a lookup-table method to describe the eigenvectors. For some special cases of N, the eigenvectors admit particularly simple descriptions. All these distinct methods not only affect how the linear Parseval identities are computed (especially for large N) but also suggest which identities are more suitable for practical applications. The eigenvectors (orthonormal basis) of the DFT are used to define the discrete fractional Fourier transform (DFrFT) [14]–[19], which has several applications in signal processing, communications, and cryptography. Consequently, our linear Parseval identities open a new door to applications of the eigenvectors of the DFT. In this paper, we answer the following questions: What are all the linear functions that are invariant under the DFT, and which information can be derived from them? Are there any nonlinear functions other than energy (Parseval's theorem) that are preserved by the DFT? What are the potential applications of these new linear and nonlinear identities? The main contributions of this paper are summarized as follows: 1) We derive the linear invariant functions of the DFT, which are invariant relations between time signals and their transformed frequency signals under the DFT. These identities can be considered as linear versions of Parseval's theorem. 2) A novel algorithm to compute the linear invariant functions is introduced.
The algorithm is modeled for a general class of transformations, so it can be applied to any variant of the DFT, such as the discrete fractional Fourier transform (DFrFT), the discrete cosine transform (DCT) [20], the discrete sine transform (DST) [21]–[22], the discrete Hartley transform [20]–[23], the generalized DFT (GDFT) [24], and even continuous versions (integral transforms) of these transformations. 3) Some interpretations of these identities are given, such as the projection-preserving property. 4) Some potential applications are investigated. A novel invariant nonlinear relation between the average circular auto-correlations of time and frequency signals is introduced, and numerical examples are given. It is also shown that for a certain type of signal, the average circular auto-correlation functions of a time sequence and of its transformed frequency sequence are both zero. 5) Finally, certain spectral analyses and boundaries for signals are highlighted. Notation: we use v^T to denote a row vector. For a linear operator T, we denote the adjoint operator by T†. The complex conjugate of the vector v is denoted by v*. We use |z| for the magnitude of a complex number z and ‖v‖ for the norm of a vector v. The notation ⟨·, ·⟩ represents the inner product of a Hilbert space.

B. PRELIMINARIES
We consider a finite-length sequence x[n] of length N with DFT X[k]. The normalized N-point DFT and the inverse DFT are defined, respectively, as follows [25]:

X[k] = (1/√N) ∑_{n=0}^{N−1} x[n] W^{nk},  (4)

x[n] = (1/√N) ∑_{k=0}^{N−1} X[k] W^{−nk},  (5)

for k = 0, 1, . . . , N − 1 and n = 0, 1, . . . , N − 1, where W ≜ e^{−j(2π/N)}. In this paper, vector and sequence notations are used interchangeably according to convenience. Using the notations given in (2) and (6), (4) and (5) can be written in matrix form, respectively, as

X = F x,  x = F† X,  (7)

where F† is the conjugate transpose of F. From (7), the DFT can be viewed as a linear operator on the space of complex N-tuples, F : C^N → C^N. Also, it is observed that F is an N × N symmetric matrix that is unitary, as F F† = I, where I is the identity matrix.
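These definitions can be checked numerically. The following NumPy sketch (illustrative, not part of the derivation; the helper name dft_matrix is ours) constructs the normalized DFT matrix and verifies that it is symmetric and unitary:

```python
import numpy as np

def dft_matrix(N):
    """Normalized N-point DFT matrix with entries F[k, n] = W**(k*n) / sqrt(N)."""
    n = np.arange(N)
    W = np.exp(-2j * np.pi / N)          # W = e^{-j 2 pi / N}
    return W ** np.outer(n, n) / np.sqrt(N)

F = dft_matrix(8)
assert np.allclose(F, F.T)                        # F is symmetric
assert np.allclose(F @ F.conj().T, np.eye(8))     # F is unitary: F F† = I
```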

C. MATHEMATICAL MODEL
Let us assume the signal v is transformed to the signal V by a linear transformation T, as T(v) = V. In this paper, we find explicit linear relations between the signal v and its transform V that eliminate the transformation T in the process. These relations will be given by finding all linear invariant maps ϕ with ϕ(v) = ϕ(V). In other words, if we consider the linear transformation T as a matrix, then the matrix equation T v = V is indeed a system of linear equations, and each invariant ϕ gives one linear equation ϕ(v) = ϕ(V). Since the space of all invariants is a vector space, its dimension gives the number of linearly independent linear equations in terms of v and V. In this paper, we first introduce an explicit algorithm to compute all such linear invariants. We show that we can find all the invariants of a linear transformation T by finding the eigenspace of the adjoint operator T†. Then, we show how our general algorithm can be easily applied to the DFT as a special case.

III. MATHEMATICAL SOLUTION
In this section, since the set of all signals of finite energy forms a Hilbert space, we explain our mathematical methodology, based on techniques in Hilbert spaces, for finding linear invariant functions of a given linear transformation, such as the DFT. If H is a vector space over the field of complex numbers C, then its dual space H* is the space of all bounded linear functionals ϕ : H −→ C. For any linear transformation T : H −→ H, the adjoint operator T† : H* −→ H* is given by

(T† ϕ)(x) = ϕ(T(x)).  (9)

The results in this section are based on the Riesz representation theorem [26], which proves that the Hilbert spaces H and H* are isomorphic (see Appendix A). In this case, property (9) is equivalent to

⟨T(u), v⟩ = ⟨u, T†(v)⟩  (10)

for all u, v ∈ H.

A. LINEAR INVARIANT FUNCTIONS
Theorem I. For a linear transformation T : H −→ H, a bounded linear functional ϕ : H −→ C is invariant under T if and only if ϕ(x) = ⟨x, v⟩, where v ∈ H is an eigenvector of T† corresponding to the eigenvalue λ = 1, and v is uniquely determined by ϕ.
Proof. The proof of this theorem can be found in Appendix A.
Theorem I not only finds a set of bounded linear invariant functions but also shows that these functions are unique. The invariant functions are given by the inner product of the Hilbert space. Note that the existence of an adjoint for a bounded operator on a Hilbert space is guaranteed by the Riesz representation theorem. The theorem is formulated for a general class of linear transformations on arbitrary Hilbert spaces and can be applied even when the dimension of H is infinite. Then, as a special case, Theorem I will be applied to the DFT, which is a unitary transformation of finite dimension.

B. PRINCIPAL ALGORITHM
For a Hilbert space H, to compute the bounded invariant functions ϕ of a linear transformation T : H −→ H, the following algorithm is proposed: 1) form the adjoint operator T†; 2) find the eigenspace of T† corresponding to the eigenvalue λ = 1; 3) for each basis eigenvector v of this eigenspace, output the invariant function ϕ_v(x) = ⟨x, v⟩. This algorithm can be applied to several important transformations, such as the continuous Fourier transform, whose domain is an infinite-dimensional Hilbert space of certain functions. Note that, while any eigenvector v of λ = 1 gives a unique invariant function ϕ_v, the number of such eigenvectors is infinite; the basis elements of the eigenspace of λ = 1 give a basis of the invariant functions.
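For matrix operators, the proposed algorithm admits a direct numerical sketch (NumPy; the function name invariant_functionals is illustrative): compute T†, extract the λ = 1 eigenspace, and read off the invariant functionals ϕ_v(x) = ⟨x, v⟩:

```python
import numpy as np

def invariant_functionals(T, tol=1e-9):
    """Orthonormal basis {v} of the lambda = 1 eigenspace of the adjoint T†;
    each column v defines an invariant functional phi_v(x) = <x, v>."""
    T_adj = T.conj().T                     # adjoint of a matrix operator
    w, V = np.linalg.eig(T_adj)
    basis = V[:, np.abs(w - 1) < tol]      # eigenvectors of lambda = 1
    q, _ = np.linalg.qr(basis)             # orthonormalize the basis
    return q

# Apply the algorithm to the normalized 8-point DFT matrix
N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
V1 = invariant_functionals(F)
assert V1.shape[1] == N // 4 + 1           # floor(N/4) + 1 invariants

# Each phi_v is unchanged by the transform: <F x, v> = <x, v>
rng = np.random.default_rng(0)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
assert np.allclose(V1.conj().T @ x, V1.conj().T @ (F @ x))
```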

IV. LINEAR PARSEVAL'S THEOREM FOR THE DFT
In this section, the proposed algorithm is applied to compute the linear invariant functions of the DFT explicitly. In the algorithm, the operator T is the DFT matrix, and H = C^N, with the usual inner product

⟨x, y⟩ = ∑_{n=0}^{N−1} x[n] y*[n].

All linear functionals on a finite-dimensional vector space such as C^N are bounded. Furthermore, any linear operator on a finite-dimensional vector space has an adjoint. Hence, the DFT satisfies all the conditions of Theorem I and the proposed algorithm.

A. EIGENVALUES AND EIGENSPACE OF THE ADJOINT OF DFT
As an introduction to an explicit description of the invariant functions of the DFT, we recall the following existing results. For any N, all the eigenvalues of F are included in {1, −1, j, −j} [1]. Based on the proposed algorithm, we need to find the eigenvectors of F† corresponding to the eigenvalue λ = 1. We note that F† is the transpose of the complex conjugate of F. If λ is an eigenvalue of F with eigenvector v, then λ* is an eigenvalue of F† with eigenvector v*. This shows that λ = 1 is an eigenvalue of F† and that its eigenspace corresponding to λ = 1 is the same as that of F. Therefore, from Theorem I, some invariant functions exist for all N.

B. EIGENVECTORS OF THE DFT
In this subsection, we recall the existing results about the eigenvectors of the DFT. They will be utilized later to describe our main results. From Theorem 5.1 in [27], for N > 4, the dimension of the eigenspace corresponding to λ = 1 is ⌊N/4⌋ + 1 (also see [4] and [5] for the multiplicities of the eigenvalues). Since every eigenvector of F† gives an invariant function for the DFT, we have the same number of invariant functions. Using the spectral theory of the DFT [1]–[6], the following matrix P is formed:

P = I + F + F² + F³.

Denoting the columns of P by v_k's, we have

v_m[k] = δ[((k − m))_N] + δ[((k + m))_N] + (2/√N) cos(2πkm/N),  (16)

for m = 0, 1, · · · , ⌊N/4⌋, where the notation ((k))_N denotes k modulo N and δ[k] is the Kronecker delta function. Finally, the set of real vectors v_0, v_1, . . . , v_{⌊N/4⌋} forms a basis for the eigenspace of F corresponding to the unity eigenvalue.

C. INVARIANT FUNCTIONS OF THE DFT
Following the same steps, we can determine the linear invariant functions of the DFT. The following proposition is essential to deliver the results of Theorem I to our main result in Theorem II.
Proposition I. For an N-point DFT matrix: i) There are ⌊N/4⌋ + 1 basis invariant functions, denoted by ϕ_{v_m}, which satisfy

ϕ_{v_m}(x) = ⟨x, v_m⟩ = ∑_{n=0}^{N−1} x[n] v_m[n].

ii) Any linear combination of these basis invariant functions,

ϕ = ∑_{m=0}^{⌊N/4⌋} a_m ϕ_{v_m},

is also an invariant function of the DFT, where the a_m are arbitrary scalars.

Proof. Let v_m be a basis element of the eigenspace of λ = 1. By Theorem I, the corresponding invariant function is ϕ_{v_m}(x) = ⟨x, v_m⟩. Since the inner product of C^N is given by ⟨x, y⟩ = ∑_n x[n] y*[n] and v_m is real, the expression in part (i) follows. The second part of the proposition is an immediate consequence of the superposition property.

D. MAIN RESULT
We have found all the invariant functions of the DFT in the previous subsection. Now, using Theorem I, Proposition I, and the invariant property

F† v_m = v_m,  (21)

we state the main result of this paper for the DFT in the following theorem.
Theorem II. For an N-point DFT: i) There are ⌊N/4⌋ + 1 basis invariant linear Parseval identities between a finite-length sequence and its DFT, as follows:

∑_{n=0}^{N−1} x[n] v_m[n] = ∑_{k=0}^{N−1} X[k] v_m[k],  (22)

for m = 0, 1, . . . , ⌊N/4⌋. ii) All linear Parseval identities can be expressed as

∑_{m=0}^{⌊N/4⌋} a_m ∑_{n=0}^{N−1} x[n] v_m[n] = ∑_{m=0}^{⌊N/4⌋} a_m ∑_{k=0}^{N−1} X[k] v_m[k],

where the coefficients a_m are arbitrary scalars.
Proof. Part (i) is the immediate result of Proposition I-(i) and Property (21). Part (ii) follows by Proposition I-(ii) and Property (21).
The main contributions of Theorem II are: 1) Eq. (22) is an invariant equation under the DFT, with respect to the definition of an invariant function, similar to the one used for Parseval's theorem. 2) Eq. (22) provides the complete set of linear invariant functions, so there are no others. There is also another simple way to prove (22): if v_m is an eigenvector of F† with respect to λ = 1, then, using the properties of the adjoint,

⟨X, v_m⟩ = ⟨F x, v_m⟩ = ⟨x, F† v_m⟩ = ⟨x, v_m⟩.

V. GEOMETRIC INTERPRETATION: PROJECTION PRESERVING
In this section, we interpret the linear Parseval identities. The basis identities can be simply written as

⟨x, v⟩ = ⟨X, v⟩,  (24)

where v is an arbitrary eigenvector of the eigenvalue λ = 1.
Using (24), we can simply write ⟨x − X, v⟩ = 0. Therefore, any linear Parseval identity states that the orthogonal projections of the time signal x and its transformed frequency signal X onto any eigenvector v of λ = 1 are equal. This interpretation is also compatible with the general description of projections on Hilbert spaces, which are defined on closed subspaces. It is known that the eigenspace of any eigenvalue of a bounded linear operator on a Hilbert space is a closed subset with respect to the norm; in particular, the eigenspace of λ = 1 for the DFT is a closed subspace of C^N. There is no direct geometric picture for projections of complex vectors. To obtain a geometric interpretation of the linear Parseval identity, we write (24) as

⟨x_R, v⟩ + j⟨x_I, v⟩ = ⟨X_R, v⟩ + j⟨X_I, v⟩,

where x_R, x_I, X_R, and X_I are real vectors; since v is real, this gives ⟨x_R, v⟩ = ⟨X_R, v⟩ and ⟨x_I, v⟩ = ⟨X_I, v⟩. In Figure 1, the angles formed by the eigenvector v with the real components x_R and X_R are denoted by α and α′, respectively. Similarly, the angles between v and the imaginary components x_I and X_I are denoted by β and β′, respectively.
Note that, in general, ‖x_R‖ ≠ ‖X_R‖ and ‖x_I‖ ≠ ‖X_I‖. As seen in Figure 1,

‖x_I‖ ‖v‖ cos β = ‖X_I‖ ‖v‖ cos β′.

Hence, we obtain

‖x_I‖ / ‖X_I‖ = cos β′ / cos β.
When x is a real vector, x_I = 0 and, according to Parseval's theorem, ‖x‖² = ‖X_R‖² + ‖X_I‖². Squaring both sides of (29) and using (33), we have

(‖X_R‖² + ‖X_I‖²) cos² α = ‖X_R‖² cos² α′.

Dividing the left-hand side of (34) by ‖X_R‖² and multiplying both sides by 1/cos² α, we obtain

1 + ‖X_I‖² / ‖X_R‖² = cos² α′ / cos² α.

In other words, the angle between x_R and v can be found from information about the signal in the frequency domain.
Finally, using the identity ⟨x − X, v⟩ = 0, we see that the complex vector x − X is an element of the orthogonal complement of the real vector v; equivalently, v is an element of the orthogonal complement of x − X.
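The projection-preserving property ⟨x − X, v⟩ = 0 can be verified directly (NumPy sketch; the eigenvector v is taken as a column of P = I + F + F² + F³, an assumption of this sketch):

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

rng = np.random.default_rng(2)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = F @ x

# A real eigenvector of lambda = 1: column 0 of P = I + F + F^2 + F^3
P = np.eye(N) + F + F @ F + F @ F @ F
v = P[:, 0].real

# Projection preservation: <x, v> = <X, v>, i.e. <x - X, v> = 0
assert np.isclose(np.vdot(v, x - X), 0)
# Real and imaginary parts are projected identically as well
assert np.isclose(np.dot(x.real, v), np.dot(X.real, v))
assert np.isclose(np.dot(x.imag, v), np.dot(X.imag, v))
```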

VI. EXAMPLES
In this section, we consider 4-point and 8-point DFT examples, since the highly efficient algorithms for computing the DFT, called the fast Fourier transform (FFT) [28], are most efficient when N is a power of 2. In the following, we show how to find the linear Parseval identities for the 4-point DFT.

A. LINEAR PARSEVAL IDENTITIES FOR THE 4-POINT DFT
Let us consider the following matrix F for the 4-point DFT:

F = (1/2) ×
[ 1   1   1   1
  1  −j  −1   j
  1  −1   1  −1
  1   j  −1  −j ].

The corresponding matrix P = I + F + F² + F³ is

P =
[ 3   1   1   1
  1   1  −1   1
  1  −1   3  −1
  1   1  −1   1 ].

The first two columns of P, v_0 = [3, 1, 1, 1]^T and v_1 = [1, 1, −1, 1]^T, are the basis vectors of the eigenspace of F corresponding to the eigenvalue λ = 1. Moreover, these two vectors can be computed directly using Equation (16). Therefore, the following functionals form a basis for the space of invariant functions of the 4-point DFT:

ϕ_{v_0}(x) = 3x[0] + x[1] + x[2] + x[3],
ϕ_{v_1}(x) = x[0] + x[1] − x[2] + x[3].

From Theorem II, any linear combination of linear Parseval identities gives another linear Parseval identity. For example, from the linear combination (1/2)(ϕ_{v_0} − ϕ_{v_1}), we obtain the following new linear Parseval identity:

x[0] + x[2] = X[0] + X[2].  (43)

Equivalently, Eq. (43) is the linear Parseval identity for the eigenvector [1, 0, 1, 0]^T of λ = 1. This suggests that finding special eigenvectors of the DFT can introduce linear Parseval identities that may be more suitable for some applications (see Refs. [13] and [29]).
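A numerical check of this example (NumPy sketch; the signal values are illustrative):

```python
import numpy as np

# Normalized 4-point DFT matrix
F = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1j, -1, 1j],
                    [1, -1,  1, -1],
                    [1, 1j, -1, -1j]])

# Basis eigenvectors of lambda = 1 (first two columns of P = I + F + F^2 + F^3)
v0 = np.array([3., 1., 1., 1.])
v1 = np.array([1., 1., -1., 1.])
assert np.allclose(F @ v0, v0) and np.allclose(F @ v1, v1)

x = np.array([1., 5., 3., 6.])
X = F @ x

# The two basis identities, and the combined identity (43)
assert np.isclose(np.dot(x, v0), np.sum(X * v0))
assert np.isclose(np.dot(x, v1), np.sum(X * v1))
assert np.isclose(x[0] + x[2], X[0] + X[2])     # x[0] + x[2] = X[0] + X[2]
```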

B. LINEAR PARSEVAL IDENTITIES FOR THE 8-POINT DFT
Now consider the matrix F for the 8-point DFT, with entries F[k, n] = (1/√8) W^{kn}, where W = e^{−j(2π/8)} = (1 − j)/√2. From (16), the ⌊8/4⌋ + 1 = 3 basis eigenvectors of the eigenvalue λ = 1 are

v_0 = [2 + 1/√2, 1/√2, 1/√2, 1/√2, 1/√2, 1/√2, 1/√2, 1/√2]^T,
v_1 = [1/√2, 3/2, 0, −1/2, −1/√2, −1/2, 0, 3/2]^T,
v_2 = [1/√2, 0, 1 − 1/√2, 0, 1/√2, 0, 1 − 1/√2, 0]^T.

Therefore, we obtain the linear invariant functions ϕ_{v_m}(x) = ∑_{n=0}^{7} x[n] v_m[n] for m = 0, 1, 2. These give the following linear Parseval identities:

∑_{n=0}^{7} x[n] v_m[n] = ∑_{k=0}^{7} X[k] v_m[k],  m = 0, 1, 2.

VII. SIMPLIFICATION OF THE IDENTITIES IN THE FREQUENCY DOMAIN
As one of the properties of the DFT, when x[n] is real, its DFT has the Hermitian symmetry X[k] = X*[((N − k))_N]. Since all the eigenvectors v_m of the DFT matrix corresponding to the unity eigenvalue satisfy F v_m = v_m and are real, according to (16) they have the Hermitian symmetry v_m[k] = v*_m[((N − k))_N]. Since v_m is real, this property simplifies further to v_m[k] = v_m[((N − k))_N]. Because of the importance of the three properties of v_m that we have discussed so far, they are summarized in the following corollary:

Corollary I. Every eigenvector v_m of the DFT matrix F associated with the eigenvalue λ = 1 has the following properties: (i) v_m is real; (ii) F v_m = v_m; (iii) v_m[k] = v_m[((N − k))_N].  (52)

Proof. Properties (i) and (ii) follow from the construction in (16). For (iii), since v_m is real by (i), the Hermitian symmetry gives the relation in (52).
Using the example given in Table 1 and Figure 2 for N = 8, it is straightforward to verify the results discussed in this section. As seen from Table 1, the eigenvectors v_0, v_1, and v_2, denoted by v_0[i], v_1[i], and v_2[i], respectively, have all the properties listed in Corollary I. According to (52), to obtain the eigenvectors v_m we need only compute the first N/2 + 1 values when N is even, and the first (N + 1)/2 values when N is odd. Using this property, the frequency part of the linear Parseval identity (22) simplifies, for N even, to

∑_{k=0}^{N−1} X[k] v_m[k] = X[0] v_m[0] + X[N/2] v_m[N/2] + ∑_{k=1}^{N/2−1} (X[k] + X[N − k]) v_m[k],  (53)

and analogously for N odd, without the X[N/2] term. For the given real signal x and m = 2, the elements v_2[1] and v_2[3] of the basis vector v_2 are zero. Hence, in this case, the right-hand side of identity (53) can be further simplified, as

X[0] v_2[0] + X[4] v_2[4] + (X[2] + X[6]) v_2[2].  (54)

Due to the Hermitian symmetry that a real signal x has, the imaginary part of X satisfies Im{X[k]} = −Im{X[((N − k))_N]}. As seen from (53) and (54), when x is real, the imaginary parts of X[k] and X[N − k] cancel, so the identity involves only the real parts of X. Finally, the eigenvectors v_m have more zero entries as m increases, so their computation is simpler.

VIII. BOUNDARIES FOR THE COEFFICIENTS OF THE IDENTITIES
The coefficients of the basis linear Parseval identities are small numbers with certain boundaries. These boundaries are valid for the description of the basis vectors of the eigenspace of λ = 1 used in this paper; for another set of basis vectors, different boundaries may apply. From (16), we have

v_m[k] = δ[((k − m))_N] + δ[((k + m))_N] + (2/√N) cos(2πkm/N),  (58)

from which the boundaries for the coefficients of the identities are easily obtained, as given in the following lemma.

Lemma I. All the coefficients v_m[k] of the linear Parseval identities satisfy

−2/√N ≤ v_m[k] ≤ 3,  (59)

for k = 0, 1, . . . , N − 1 and m = 0, 1, . . . , ⌊N/4⌋.
As an example, these boundaries can be seen in Figure 3 and Table 1. Note that these boundaries apply to transformed signals without explicitly evaluating the DFT, and vice versa. Therefore, it is expected that this approach can significantly reduce the complexity of estimating the peak-to-average power ratio (PAPR) of orthogonal frequency-division multiplexing (OFDM) signals.
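Lemma I can be verified numerically over several transform lengths, including the 256-point case (NumPy sketch; the closed form for v_m[k] is the construction assumed in Section IV-B):

```python
import numpy as np

# Bounds of Lemma I: -2/sqrt(N) <= v_m[k] <= 3 for the basis vectors in (16)
for N in (4, 8, 16, 64, 256):
    k = np.arange(N)
    for m in range(N // 4 + 1):
        vm = ((k % N == m % N) + ((-k) % N == m % N)
              + (2 / np.sqrt(N)) * np.cos(2 * np.pi * k * m / N))
        assert vm.min() >= -2 / np.sqrt(N) - 1e-12
        assert vm.max() <= 3 + 1e-12        # the bound 3 is attained at N = 4
```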

IX. APPLICATIONS
Although the main objective of this paper is to introduce the linear Parseval identities, we highlight some potential applications for these identities that relate time signals and their transformed frequency signals linearly and invariantly.

A. NEW NON-LINEAR PARSEVAL IDENTITIES
The linear Parseval identities obtained in this paper can potentially introduce more nonlinear invariant functions. Since there is not enough space to study them all in this paper, here we highlight a few of them. The linear Parseval identities can be combined with the nonlinear (energy) Parseval identity to obtain new nonlinear invariant equations for a finite-length sequence and its DFT.

1) Average circular auto-correlation function
Using the first linear Parseval identity in Theorem II (m = 0), we have

∑_{n=0}^{N−1} x[n] v_0[n] = ∑_{k=0}^{N−1} X[k] v_0[k].  (61)

From (16), v_0[n] = 2δ[n] + 2/√N for the N-point DFT, so (61) can be written as follows:

2x[0] + (2/√N) ∑_{n=0}^{N−1} x[n] = 2X[0] + (2/√N) ∑_{k=0}^{N−1} X[k].  (62)

Now consider the squared magnitudes of the sums in (62),

|∑_{n=0}^{N−1} x[n]|²  and  |∑_{k=0}^{N−1} X[k]|².  (63)

Then, we can expand (63) into the following form:

∑_{m=0}^{N−1} R_x[m] = N |X[0]|²,  ∑_{l=0}^{N−1} R_X[l] = N |x[0]|²,  (64)

where R_x[m] and R_X[l] are called the circular autocorrelation functions of x[n] and X[k], respectively, and are defined as follows:

R_x[m] = ∑_{n=0}^{N−1} x[n] x*[((n − m))_N],  R_X[l] = ∑_{k=0}^{N−1} X[k] X*[((k − l))_N].  (65)

In the sequel, based on (64), we consider two special cases.

2) Special cases
As a special case, for a finite-length sequence with zero direct-current (DC) component, X[0] = 0, (64) is simplified as follows:

∑_{m=0}^{N−1} R_x[m] = 0.

As the second special case, for a finite-length sequence with x[0] = 0 and zero DC component, (64) can be further simplified, as

∑_{m=0}^{N−1} R_x[m] = ∑_{l=0}^{N−1} R_X[l] = 0.

3) Example

One of the properties of the circular autocorrelation function of a signal x is that it is a Hermitian function, R_x[m] = R*_x[((N − m))_N]. This in particular implies that R_x[N/2] is a real number when N is even. Therefore, the averages of the circular autocorrelation functions are real numbers.

4) Interpretations
In the previous example, the average circular autocorrelation is zero in both domains. Moreover, the average circular autocorrelation of this type of signal, excluding R_x[0] and R_X[0], is the negative of the energy of the signal. This observation is proved as follows.
Proposition IX.1. Let x be a zero-DC signal, X[0] = 0. Then: i) The average circular auto-correlation of x is zero, ∑_{m=0}^{N−1} R_x[m] = 0. ii) Excluding R_x[0], the sum of the time-domain circular autocorrelation function equals the negative of the energy of the signal, ∑_{m=1}^{N−1} R_x[m] = −‖x‖²; when additionally x[0] = 0, the frequency-domain sum ∑_{l=1}^{N−1} R_X[l] gives the same result. iii) Furthermore, if x[0] = 0, the average of the frequency-domain circular autocorrelation function of the signal is also zero, ∑_{l=0}^{N−1} R_X[l] = 0. Proof. See Appendix B.
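Proposition IX.1 can be verified numerically for a random sequence of length N = 256 with x[0] = X[0] = 0 (NumPy sketch; the signal construction is illustrative):

```python
import numpy as np

def circ_autocorr(x):
    """Circular autocorrelation R_x[m] = sum_n x[n] * conj(x[((n - m))_N])."""
    return np.array([np.sum(x * np.conj(np.roll(x, m))) for m in range(len(x))])

N = 256
rng = np.random.default_rng(4)
x = np.zeros(N)
y = rng.standard_normal(N - 1)
x[1:] = y - y.mean()                       # x[0] = 0 and sum(x) = 0, so X[0] = 0

n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
X = F @ x
assert np.isclose(X[0], 0)

Rx, RX = circ_autocorr(x), circ_autocorr(X)
energy = np.sum(np.abs(x) ** 2)
assert np.isclose(Rx.mean(), 0)            # i)   zero average in time domain
assert np.isclose(RX.mean(), 0)            # iii) zero average in frequency domain
assert np.isclose(Rx[1:].sum(), -energy)   # ii)  sum excluding R_x[0] is -energy
assert np.isclose(RX[1:].sum(), -energy)   #      and likewise for R_X
```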
In Figure 5(a), we consider a real sequence of length N = 256 with property x[0] = X[0] = 0. The magnitude of its DFT is shown in Figure 5(b). The time and frequency domain circular autocorrelation functions, shown in Figure 5(c) and 5(d), have the same average.

X. SIGNAL DESIGN
If v_m is an eigenvector associated with the eigenvalue λ = 1, then F v_m = v_m. The eigenvectors of the eigenvalue λ = 1 thus form a set of signals that are unchanged by the DFT. Therefore, they can be used in systems for which low computational complexity is the main objective, because their DFT requires no computation. The entries of v_m are bounded between −2/√N and 3, as shown in (59), which shows that v_m contains low-power signals. Furthermore, the eigenvectors whose entries take only the values +1 and −1, as explained in [29], are of particular interest; they have a flat power spectrum, which can be used in coding and communications. As a result, the linear Parseval identities draw further attention to this special eigenspace (of λ = 1), which can be applied as a new tool for designing and analyzing signals.

A. SPECTRAL ANALYSIS
Here, by giving some simple examples, we show how these identities can be used for analyzing signals. Consider the 4-point DFT and assume that the time signal x is real and positive, x[n] ≥ 0. Using the linear Parseval identity (43), we have X[2] = x[0] + x[2] − X[0]. This implies that X[2] is real and

−X[0] ≤ X[2] ≤ x[0] + x[2].  (77)

This provides an example of spectral analysis with boundaries for X[2]. As another example, consider the time series (1, 5, 3, 6). Without any frequency computations, we can easily know that −10 ≤ X[2] ≤ 10. Since the linear Parseval identities are invariant equations, we can obtain similar boundaries for time signals. Now assume instead that X is real and positive; then

−x[0] ≤ x[2] ≤ X[0] + X[2].  (78)

1) Special signals
The linear Parseval identities are simplified for signals with special algebraic properties. Let us consider signals with zero DC, X[0] = 0. Then, using identity (43), we have X[2] = x[0] + x[2]. As an example, consider the signal x = [3, −2, 6, −7]^T. Without any computation, we know that X[2] = 9. Here, let us explain another idea for the spectral analysis of special signals using the proposed identities. If we apply the Cauchy–Schwarz inequality (generalized case of complex numbers) to the identity (43), we obtain

|X[0] + X[2]| = |x[0] + x[2]| ≤ √(2(x[0]² + x[2]²)).  (80)

For zero-DC signals, the inequality in (80) simplifies to an upper bound for |X[2]|,

|X[2]| ≤ √(2(x[0]² + x[2]²)).  (81)

For example, consider the signal x = [2, −5, 4, −1]^T. Then, the above inequality gives the upper bound |X[2]| ≤ √40. Now, we summarize our discussion in this section as follows: 1) Linear equations are usually convenient for computation and analysis, since tools from linear algebra can often be applied. 2) In these identities, all coefficients are real numbers, which reduces the computational complexity of signal analysis. 3) These identities are invariant, which means that the time-domain and frequency-domain parts are identical. This property is very useful, because if a result is obtained for the time signals using these identities, it also holds for the transformed frequency signals by similar arguments. 4) Signals with special properties most likely have simpler identities, which can be more suitable for signal analysis. For example, the binary eigenvector (1, 0, 1, 0) of λ = 1 provides a simple linear identity. These identities can be useful for analyzing binary vectors (see [30]).
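The zero-DC shortcut and the Cauchy–Schwarz bound can be checked numerically (NumPy sketch; the signal values are those of the examples above):

```python
import numpy as np

# Normalized 4-point DFT matrix
F = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1j, -1, 1j],
                    [1, -1,  1, -1],
                    [1, 1j, -1, -1j]])

# Zero-DC signal: identity (43) collapses to X[2] = x[0] + x[2]
x = np.array([3., -2., 6., -7.])
assert np.isclose(x.sum(), 0)                # zero DC, so X[0] = 0
X = F @ x
assert np.isclose(X[2], x[0] + x[2])         # X[2] = 9 without evaluating the DFT
assert np.isclose(X[2], 9)

# Cauchy-Schwarz bound (81): |X[2]| <= sqrt(2 (x[0]^2 + x[2]^2))
x2 = np.array([2., -5., 4., -1.])
X2 = F @ x2
bound = np.sqrt(2 * (x2[0] ** 2 + x2[2] ** 2))
assert np.isclose(bound, np.sqrt(40))
assert np.abs(X2[2]) <= bound
```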

XI. CONCLUSION
In this paper, we have shown how linear invariant functions of the DFT can be computed using our novel algorithm. Our linear identities, as linear versions of Parseval's formula, offer several new analytical tools relating a finite-length sequence and its DFT. Since the DFT eigenvectors are not unique, research on finding other eigenvectors with certain properties can assist in exploring other linear invariant functions for signal analysis in some applications. Our linear identities can potentially be used to study different problems in digital signal processing, such as inverse problems and quantum computing.

APPENDIX A A. PROOF OF THEOREM I
In this section, we prove Theorem I. The dual space of H, the space of all bounded (or continuous) linear functionals ϕ : H −→ C, is denoted by H*. One notes that the isomorphism H* ≅ H is given by the map φ : H −→ H*, where φ(v) = ϕ_v and ϕ_v(x) = ⟨x, v⟩. The map φ is an isometry whose bijectivity and norm-preservation properties follow from the Riesz representation theorem.
Proof. According to the Riesz representation theorem, for any bounded linear functional ϕ : H −→ C, there exists a unique element v ∈ H such that

ϕ(x) = ⟨x, v⟩.  (82)

Now we prove that v must be an eigenvector of T†. If the linear functional ϕ is invariant under the transformation T, then ϕ(x) = ϕ(T(x)). Consequently, ϕ(x) = (T† ϕ)(x) for all x ∈ H. This means ϕ = T† ϕ. Using (82), we have

⟨x, v⟩ = ϕ(x)  (84a)
= (T† ϕ)(x) = ϕ(T(x))  (84b)
= ⟨T(x), v⟩  (84c)
= ⟨x, T†(v)⟩.  (84d)

We used Equation (82) in (84a) and (84c), the adjoint property (9) in (84b), and (10) in (84d). Therefore, ⟨x, v⟩ = ⟨x, T†(v)⟩ for all x ∈ H. This implies T†(v) = v, which proves that v is an eigenvector of T† with respect to the eigenvalue λ = 1.
Conversely, assume there exists v ∈ H such that ϕ(x) = ⟨x, v⟩, where v is an eigenvector of T† corresponding to λ = 1. The following computation shows that ϕ is invariant under T:

ϕ(T(x)) = ⟨T(x), v⟩ = ⟨x, T†(v)⟩ = ⟨x, v⟩ = ϕ(x),  (86)

where we used (10) in the second equality (86b). This shows ϕ(T(x)) = ϕ(x); in other words, ϕ is invariant under T. ii) This follows from part (i). iii) It is an immediate result of the invariant Equation (69).