A Normalized Adaptive Filtering Algorithm Based on Geometric Algebra



I. INTRODUCTION
Adaptive filters (AFs) are usually described via standard vector calculus and linear algebra [1]. Almost all existing filter algorithms iterate by minimizing a mean-square-error cost function [2]-[4], and the least mean-square (LMS) algorithm is the most common filter algorithm [2]. However, the performance of the LMS algorithm is not optimal: its main drawback is that it is sensitive to the input signal. The NLMS algorithm was proposed to solve this by normalizing with the power of the input signal [5]. The NLMS algorithm is the most popular due to its simplicity, but its stability is controlled by a fixed step size. Since the step size affects both convergence rate and mean square error, C. Paleologu proposed a variable step-size NLMS (VSS-NLMS) algorithm [6], which resolves the conflict between convergence rate and low excess mean square error associated with the conventional NLMS algorithm [6]-[10]. Another type of variable step-size algorithm is the regularized NLMS algorithm: Choi presented a robust regularized NLMS (RR-NLMS) algorithm [7], which updates the regularization parameter adaptively using a normalized gradient, improving the convergence rate. Although many improved algorithms based on the NLMS algorithm exist and their convergence rate has improved to some extent, convergence is still relatively slow. Compared with the NLMS algorithm, filter algorithms based on high-order statistics have better performance. For example, the least mean fourth (LMF) algorithm performs better than NLMS in terms of convergence rate and steady-state error [11], but the LMF algorithm cannot guarantee stability when the input power of the adaptive filter increases. Therefore, the NLMF algorithm was proposed, which remains stable relative to the LMF algorithm [12]; the NLMF algorithm has also been used successfully in many applications.
The associate editor coordinating the review of this manuscript and approving it for publication was Halil Ersin Soken.
In order to apply the NLMF algorithm to the problem of channel equalization with co-channel interference and additive white Gaussian noise, Zerguine et al. [13] proposed the normalized LMF (XE-NLMF) algorithm, at a slightly increased computational cost; the XE-NLMF algorithm is mainly intended for real-valued signals. Faiz and Zerguine [14] therefore proposed the normalized sign regressor least mean fourth (NSRLMF) adaptive algorithm for complex-valued and real-valued signals. However, NLMF-type algorithms have not been used to process multidimensional signals. To solve this problem, we introduce the concept of geometric algebra (GA), which represents a multidimensional signal as a multivector in GA space. In GA space, geometric algebra provides an efficient computing framework that does not rely on coordinate information and simplifies the complexity of computation [15]-[19]. Recently, an LMS filter algorithm based on GA theory was proposed; the GA-LMS algorithm has been applied to the 3-D point-cloud registration problem and the recovery of 6-degrees-of-freedom (6-DOF) transformations [20], [21]. The GA-LMS algorithm has been widely used in signal processing, but its convergence rate is slow. Compared with the GA-LMS algorithm, the GA-LMK algorithm [22] improves convergence performance and steady-state error, but it is sensitive to the input signal. The NLMF and NLMS algorithms solve the problem of sensitivity to input signals and provide significant improvement in convergence rate, but they are not well suited to processing multidimensional signals, which is the focus of this paper.
VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
The contributions of this paper are as follows. First, taking advantage of geometric algebra theory, the GA-NLMS and GA-NLMF algorithms represent a multidimensional signal as a multivector; compared with the original NLMS and NLMF algorithms, they process the multidimensional signal in a holistic way. Second, the GA-NLMS algorithm is proposed for the processing of multidimensional signals; it minimizes a cost function based on the mean square normalized estimation error and improves the convergence rate, but it has a higher steady-state error. Therefore, we also propose a normalized least mean fourth filtering algorithm based on geometric algebra, and we show that the proposed GA-NLMF algorithm greatly improves steady-state error and convergence rate over the GA-NLMS algorithm and some existing algorithms; the GA-NLMF adaptive filtering algorithm may eventually become a powerful tool for multidimensional signal processing. This paper is organized as follows. Section II introduces the basics of geometric algebra and geometric calculus, and the original NLMF and NLMS algorithms. Section III introduces the GA-NLMS and GA-NLMF algorithms. Simulation results are shown in Section IV to analyze the performance of the GA-NLMF and GA-NLMS algorithms. Finally, Section V gives the conclusion.

II. GEOMETRIC ALGEBRA (GA)
A. THE BASICS OF GEOMETRIC ALGEBRA
In this section, we review the basics of geometric algebra (GA). Let G_n denote the geometric algebra of n-dimensional space; G_n is a geometric extension of R^n that enables the algebraic representation of magnitudes and orientations [23]-[26]. The geometric product is the most important operation in GA space; it comprises the inner product and the outer product. Consider vectors a and b in R^n. The inner product of a and b can be defined as

a · b = |a||b| cos φ (1)

where |·| denotes the magnitude of a vector and φ is the angle between a and b. Since formula (1) is a scalar, the inner product is commutative, i.e., a · b = b · a.
The outer product a ∧ b is the product of the exterior algebra introduced in Grassmann's Ausdehnungslehre [20], [27]-[29]. It is the oriented parallelogram swept out when vector a is moved along the direction determined by vector b; such a surface can be interpreted as a hyperplane. The outer product is defined as

a ∧ b = |a||b| sin φ I_{a,b} (2)

where I_{a,b} is the unit bivector that denotes the orientation of the hyperplane a ∧ b. Since formula (2) is directed, the outer product is anticommutative, i.e., a ∧ b = −b ∧ a. Finally, the geometric product of the vectors a and b is denoted by ab and defined as

ab = a · b + a ∧ b

Because the outer product is anticommutative, the geometric product is noncommutative in general, that is, ab ≠ ba.
In GA space, the n orthonormal basis vectors of R^n generate 2^n elements via the geometric product [30], [31]. Denoting the orthonormal basis vectors of the n-dimensional GA space by e_1, e_2, ..., e_n, the basis of the GA space is

{1, e_i, e_i e_j, ..., e_1 e_2 ... e_n}

For the R^3 case, G_3 has 2^3 = 8 elements, with the basis {1, e_1, e_2, e_3, e_12, e_23, e_13, I}, i.e., one scalar, three orthogonal vectors e_i, three bivectors e_ij = e_i e_j, and one trivector I = e_1 e_2 e_3. In addition, a multivector A can be defined as [32]

A = Σ_k ⟨A⟩_k = ⟨A⟩_0 + ⟨A⟩_1 + ⟨A⟩_2 + ...

which is made up of its k-vectors (or k-grades) ⟨A⟩_k, with k = 0 (scalars), k = 1 (vectors), k = 2 (bivectors), k = 3 (trivectors), and so on. GA theory has the ability to combine scalars, vectors, and hyper-complex quantities in a single element (a multivector) [33]-[36]. The reverse of a multivector A is defined as

Ã = Σ_k (−1)^{k(k−1)/2} ⟨A⟩_k

For example, for A = ⟨A⟩_0 + ⟨A⟩_1 + ⟨A⟩_2, the reverse is Ã = ⟨A⟩_0 + ⟨A⟩_1 − ⟨A⟩_2. Note that the 0-vector part of a multivector is not affected by reversion, so ⟨Ã⟩_0 = ⟨A⟩_0. In addition, the scalar product (∗) is defined as

A ∗ B = ⟨AB⟩_0

i.e., the scalar product is the scalar part (0-vector) of the geometric product of A and B.
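The blade structure above can be made concrete in code. The following is a minimal sketch (our own illustration, not code from the paper) that represents a G_3 multivector as an 8-component coefficient array indexed by bitmasks (bit 0 ↔ e_1, bit 1 ↔ e_2, bit 2 ↔ e_3) and implements the geometric product, reverse, and scalar product for the Euclidean metric; all function names are ours.

```python
import numpy as np

def blade_sign(a, b):
    """Sign from reordering the basis vectors of the blade product a*b (bitmask encoding)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")  # count basis-vector transpositions
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(A, B):
    """Geometric product of two G3 multivectors (length-8 coefficient arrays)."""
    C = np.zeros(8)
    for a in range(8):
        if A[a]:
            for b in range(8):
                if B[b]:
                    C[a ^ b] += blade_sign(a, b) * A[a] * B[b]
    return C

def reverse(A):
    """Reverse: the grade-k part picks up the factor (-1)^(k(k-1)/2)."""
    out = A.copy()
    for m in range(8):
        k = bin(m).count("1")
        if (k * (k - 1) // 2) % 2:
            out[m] = -out[m]
    return out

def scalar_product(A, B):
    """A * B = <AB>_0, the scalar part of the geometric product."""
    return gp(A, B)[0]

# e1 e2 = e12, and (e12)^2 = -1, as in the text
e1 = np.eye(8)[0b001]
e2 = np.eye(8)[0b010]
e12 = gp(e1, e2)
```

For two vector-grade multivectors, `scalar_product` reproduces the ordinary dot product, and `gp` returns the dot product in the scalar slot plus the bivector components of the outer product, matching ab = a · b + a ∧ b.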
Besides, arrays of general multivectors can be formed. For the G_3 case, given M multivectors {U_1, U_2, ..., U_M}, the array can be represented as [30]

u = [U_1, U_2, ..., U_M]^T

This work adopts the notion of a matrix of multivectors: the elements of the matrix are multivector-valued. Similarly, given M multivectors {W_1, W_2, ..., W_M}, the array w is defined as

w = [W_1, W_2, ..., W_M]^T

Then the array product between them is defined as

u^T w = Σ_{i=1}^{M} U_i W_i (10)

where each term U_i W_i is a geometric product, so the result of u^T w is a multivector. Similarly, the reverse transpose array extends the reverse operation of multivectors to arrays of multivectors. Therefore, we have

u* = [Ũ_1, Ũ_2, ..., Ũ_M] (11)

which is the reversed version of u. In GA space, the array product between u and u* is |u|^2 = u* u, which is a multivector. In addition, the magnitude of a multivector is given in terms of the scalar product as

|A| = √(A ∗ Ã)

B. GENERAL COST FUNCTION IN GA SPACE
Following the guidelines in [30], the minimization problem in geometric algebra can be formulated with a cost function of the general form

J(w) = |D − Σ_{k=1}^{M} P_k X Q_k|^2

where D, P_k, X, Q_k are all general multivectors and M is the number of taps in the adaptive filter. By setting different values for the parameters in this formula, cost functions for different cases can be obtained. In this paper, we select P_k = U_k, X = 1, Q_k = W_k, where P_k, Q_k are general multivectors; we then obtain the estimation error E = D − u^T w, where u^T w is the array product.

C. RELATED ALGORITHM 1) NORMALIZED LEAST MEAN SQUARE ALGORITHM
This section introduces the normalized least mean square (NLMS) algorithm [5]. Its cost function is based on the normalized mean square estimation error, and the estimation error has the form

e_k = a_k − h_k^T x_k

where x_k denotes the input vector at time k, a_k denotes the output vector at time k, h_k is the weight vector of the adaptive filter, and e_k is the error between the output signal and the expected signal. From [5], the algorithm is based on minimizing the cost function

J(h) = |e_k|^2 / ||x_k||^2

Taking the derivative of this cost function with respect to h and descending along the negative gradient gives the corresponding gradient algorithm

h_{k+1} = h_k + μ e_k x_k / ||x_k||^2

which is the NLMS algorithm. The normalization divides the weight-vector update term of the regular LMS algorithm by the squared norm of the input, so the normalized update term remains bounded as ||x_k|| increases: the stability of the algorithm does not depend on the input signal power of the adaptive filter.
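As a concrete illustration (our own sketch, not code from [5]; the signal, tap count, and step size below are arbitrary choices), the NLMS update can be implemented in a few lines of NumPy and used for one-step-ahead prediction of a sinusoid:

```python
import numpy as np

def nlms(x, d, M=4, mu=0.5, eps=1e-8):
    """NLMS: adapt an M-tap filter h so that h'x_k tracks d_k."""
    h = np.zeros(M)
    err = []
    for k in range(M, len(x)):
        xk = x[k - M:k][::-1]                # M most recent input samples
        e = d[k] - h @ xk                    # estimation error e_k
        h += mu * e * xk / (xk @ xk + eps)   # normalized update
        err.append(e)
    return h, np.array(err)

# one-step-ahead prediction: estimate x[k] from its own past M samples
n = np.arange(1000)
x = np.sin(0.3 * n)
h, err = nlms(x, x)
```

Because the update is divided by ||x_k||^2 (plus a small ε to avoid division by zero), the effective step stays bounded for any input power, and the error on this predictable signal decays rapidly.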

2) NORMALIZED LEAST MEAN FOURTH ALGORITHM
This section introduces the normalized least mean fourth (NLMF) algorithm [12]; the derivation is similar to that of the NLMS algorithm. Its cost function is based on the normalized fourth power of the estimation error. From [12], the NLMF algorithm is based on minimizing the cost function

J(h) = |e_k|^4 / ||x_k||^4

Taking the derivative of this cost function with respect to h gives the gradient, and the corresponding gradient algorithm is

h_{k+1} = h_k + μ e_k^3 x_k / ||x_k||^4
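A sketch analogous to the NLMS one above (again our own illustration; [12] uses an additional stabilizing term in the denominator, which we approximate here with a small ε, and the parameter choices are ours):

```python
import numpy as np

def nlmf(x, d, M=4, mu=0.1, eps=1e-8):
    """NLMF: the gradient of |e|^4 / ||x||^4 gives a cubed-error update."""
    h = np.zeros(M)
    err = []
    for k in range(M, len(x)):
        xk = x[k - M:k][::-1]
        e = d[k] - h @ xk
        h += mu * e**3 * xk / (xk @ xk + eps) ** 2  # normalized fourth-order update
        err.append(e)
    return h, np.array(err)

# one-step-ahead prediction of a sinusoid, as in the NLMS sketch
n = np.arange(3000)
x = np.sin(0.3 * n)
h, err = nlmf(x, x)
```

The cubed error makes the update aggressive for large errors and gentle near convergence, which is the qualitative behavior that gives LMF-type algorithms their convergence-rate advantage.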

III. THE PROPOSED ALGORITHM

A. NORMALIZED LEAST MEAN SQUARE ALGORITHM BASED ON GEOMETRIC ALGEBRA
In this section, we derive the GA-NLMS algorithm using geometric algebra and geometric calculus (GC) theory [37]. The cost function is based on the mean square normalized estimation error E. To minimize the cost function, we need to update the weights by computing derivatives of multivectors in GA space. Thus, at instant i, the instantaneous cost function J(i) is [38]-[40]

J(i) = |E(i)|^2 / |u_i|^2

where u_i is the input signal array and D(i) is the output multivector signal. Given E(i) and w_{i−1}, we can update the array of multivectors w by a recursive rule of the form [31]

w_i = w_{i−1} + μ h

where μ is the adaptive filter step size and h is an array of multivectors related to the estimation error E(i). The update direction adopted in this work is opposite to the instantaneous gradient direction, following the instantaneous steepest-descent rule [1], [39], [40]. We therefore take h = −G ∂_w J(w_{i−1}), yielding the GA-based adaptive filter rule

w_i = w_{i−1} − μ G ∂_w J(w_{i−1}) (21)
where G is a matrix with multivector entries; choosing a suitable G leads to different types of adaptive algorithms. In this work, we set G to be the identity matrix.
To obtain the GA-NLMS adaptive rule, we need to calculate the multivector derivative in equation (21). The differential operator ∂_w has the algebraic properties of a multivector in GA space [40]; therefore the gradient ∂_w J(w_{i−1}) can be seen as the geometric product of ∂_w and J(w_{i−1}).
In GA space, any multivector A can be decomposed into blades via [31]

A = Σ_k A^k λ_k (22)

where A^k is a scalar and λ_k, λ^k, k = 0, 1, 2, ..., 2^n − 1, are the basis and reciprocal basis of G_n. The reciprocal bases satisfy λ^i · λ_j = δ_ij, where δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j. We can then use formula (22) to decompose ∂_w into blades [31]

∂_w = Σ_l λ^l ∂_{w,l} (23)

where {λ_l} is the basis of G_n and ∂_{w,l} denotes the ordinary derivative from standard calculus, which only affects blade coefficients.
Formula (19) can be rewritten in terms of the scalar product as

J(i) = (E(i) ∗ Ẽ(i)) / |u_i|^2 (24)

where u_i is an array of multivectors, its square power is a multivector, and the denominator is the norm of a multivector, which is a scalar. E(i) denotes the error between the estimated signal and the actual signal, and it can be decomposed in terms of its blades via (22):

E(i) = Σ_s E^s λ_s (25)

To derive the GA-NLMS adaptive rule, we need to calculate ∂_w J(w_{i−1}). Replacing the expectation of the partial derivatives with its instantaneous estimate and simplifying gives (26); according to (25), we obtain (27), and plugging (23) and (27) into (26) results in (28). We know that E(i) = D(i) − D̂(i), where D̂(i) = u_i^T w_{i−1} is the filter output, so the blade coefficients of the estimation error are

E^s = D^s − D̂^s (29)

Thus ∂_{w,l} E^s can be obtained from (29) as in (30); since D^s does not depend on the weight array w, ∂_{w,l} D^s = 0. Similarly, the term D̂^s is obtained by decomposing D̂(i) into its blades [31] as in (31), which requires performing the decomposition (22) on u_i^T and w_{i−1} as in (32), where u_{i,q}^T and w_{i−1,t} are respectively 1 × M and M × 1 arrays with real entries. Plugging (32) into (31) gives (33), from which (34) follows; plugging (34) into (30) then gives (35). Noticing that ∂_{w,l} w_t is different from zero only when l = t, we adopt the Kronecker delta function [31], ∂_{w,l} w_t = δ_{t,l}. Finally, substituting (35) into (28), the stochastic gradient is obtained as in (36). According to the GAAF literature, setting G equal to the identity matrix in (21) yields the GA-NLMS update rule

w_i = w_{i−1} + μ (u*_i E(i)) / |u_i|^2 (37)
where µ is the step size of the adaptive filter algorithm.
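To make the update concrete, here is a self-contained toy sketch (ours, not the paper's code) of a normalized GA update of the form w ← w + μ Ũ E / |U|^2 in G_3, for the simplest case of a single tap (M = 1) and vector-grade inputs. Multivectors are length-8 coefficient arrays indexed by bitmasks; all names and parameter choices are our own.

```python
import numpy as np

# --- minimal G3 machinery: multivector = length-8 array, blade index = bitmask ---
def blade_sign(a, b):
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(A, B):
    """Geometric product of two G3 multivectors."""
    C = np.zeros(8)
    for a in range(8):
        if A[a]:
            for b in range(8):
                if B[b]:
                    C[a ^ b] += blade_sign(a, b) * A[a] * B[b]
    return C

def reverse(A):
    """Grade-k part picks up (-1)^(k(k-1)/2)."""
    out = A.copy()
    for m in range(8):
        k = bin(m).count("1")
        if (k * (k - 1) // 2) % 2:
            out[m] = -out[m]
    return out

def norm2(A):
    return gp(reverse(A), A)[0]   # |A|^2 = <A~ A>_0, a scalar

# --- single-tap normalized GA identification: D = U W, recover the multivector W ---
rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)   # unknown multivector weight
w = np.zeros(8)
mu = 0.5
for _ in range(100):
    U = np.zeros(8)
    U[[1, 2, 4]] = rng.standard_normal(3)   # random vector-grade input
    D = gp(U, w_true)                       # desired signal
    E = D - gp(U, w)                        # estimation error E(i)
    w = w + mu * gp(reverse(U), E) / (norm2(U) + 1e-12)
```

For a vector-grade U, Ũ U = |U|^2 is a scalar, so each normalized step contracts the weight error by the factor (1 − μ) regardless of the input power, which is exactly the insensitivity to input scaling that normalization is meant to provide.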

B. NORMALIZED LEAST MEAN FOURTH ALGORITHM BASED ON GEOMETRIC ALGEBRA
In this section, we derive the GA-NLMF algorithm using geometric algebra (GA) and geometric calculus (GC) theory. The cost function is based on the fourth power of the normalized estimation error E, and we need to minimize it. Similarly to the derivation of GA-NLMS, we can obtain the GA-NLMF update rule. The cost function J(i) is [39], [40]

J(i) = |E(i)|^4 / |u_i|^4 (38)

Formula (38) can be rewritten in terms of the scalar product as

J(i) = (E(i) ∗ Ẽ(i))^2 / |u_i|^4 (39)

To derive the GA-NLMF adaptive rule, we need to calculate ∂_w J(w_{i−1}). Replacing the expectation of the partial derivatives with its instantaneous estimate and simplifying gives (40); according to (25), we obtain (41), and plugging (23) and (41) into (40) results in (42). The term ∂_{w,l} (E^s)^4 can be obtained using the estimation error in (29) as in (43); since D^s does not depend on the weight array w, ∂_{w,l} D^s = 0. Plugging (34) into (43) gives (44). Noticing that ∂_{w,l} w_t is different from zero only when l = t, we adopt the Kronecker delta function [38], ∂_{w,l} w_t = δ_{t,l}. Finally, substituting (44) into (42), the stochastic gradient is obtained as in (45). According to the GAAF literature, we get the GA-NLMF update rule

w_i = w_{i−1} + μ (u*_i E(i) |E(i)|^2) / |u_i|^4 (46)
where µ is the step size of the adaptive filter algorithm.
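Analogously, a fourth-order normalized GA update of the form w ← w + μ Ũ E |E|^2 / |U|^4 can be sketched (again our own toy construction: single tap, vector-grade inputs, our parameter choices). The extra |E|^2 factor makes steps large while the error is large and small near the optimum, so convergence slows as the error shrinks.

```python
import numpy as np

def blade_sign(a, b):
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(A, B):
    """Geometric product of two G3 multivectors (length-8 arrays)."""
    C = np.zeros(8)
    for a in range(8):
        if A[a]:
            for b in range(8):
                if B[b]:
                    C[a ^ b] += blade_sign(a, b) * A[a] * B[b]
    return C

def reverse(A):
    out = A.copy()
    for m in range(8):
        k = bin(m).count("1")
        if (k * (k - 1) // 2) % 2:
            out[m] = -out[m]
    return out

def norm2(A):
    return gp(reverse(A), A)[0]   # |A|^2, a scalar

# single-tap fourth-order identification: D = U W, recover W
rng = np.random.default_rng(1)
w_true = rng.standard_normal(8)
w_true /= np.linalg.norm(w_true)   # unit-norm target keeps the toy update stable
w = np.zeros(8)
mu = 0.5
for _ in range(2000):
    U = np.zeros(8)
    U[[1, 2, 4]] = rng.standard_normal(3)   # random vector-grade input
    D = gp(U, w_true)
    E = D - gp(U, w)
    n2u = norm2(U) + 1e-12
    w = w + mu * gp(reverse(U), E) * norm2(E) / n2u**2
```

In this toy setting the weight error obeys Δ ← Δ(1 − μ|Δ|^2): fast initially, slow near the solution, which mirrors the fourth-order cost's behavior.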

IV. SIMULATION ANALYSIS
Signal prediction is one of the core functions of adaptive filtering; therefore, in this section, we test the performance of the proposed algorithms on one-step-ahead prediction. Several comparative experiments are conducted, as follows. In Part A, we choose the 3D Lorenz attractor to test the performance of the NLMS, NLMF, GA-NLMS, GA-LMS, GA-LMK [22], and GA-NLMF algorithms. First, we compare NLMS [5] with GA-NLMS on the one-step-ahead prediction problem. Second, we compare NLMF [12] with GA-NLMF. Third, we compare GA-NLMS with GA-LMS [20]. Fourth, we compare GA-NLMF with GA-LMK. Finally, we analyze the performance of GA-NLMS and GA-NLMF. In Part B, we choose a 3D multiscroll attractor to test the performance of the GA-NLMS, GA-LMS, GA-LMK, and GA-NLMF algorithms. In Part C, we choose a complex-valued signal to test the performance of the GA-LMK and GA-NLMF algorithms. As in linear algebra, the steady-state error of a GA algorithm must be reported as a scalar value. Therefore, to compare the convergence performance and steady-state error of the different algorithms, we use the magnitude of the mean absolute error between the signal estimated by each algorithm and the actual signal.

A. 3D LORENZ ATTRACTOR
The Lorenz attractor is a 3-dimensional system used to model atmospheric turbulence, lasers, and so on. It is defined by the following differential equations [41]:

dx/dt = α(y − x)
dy/dt = x(ρ − z) − y
dz/dt = xy − βz

We set the values of α, ρ, and β to 10, 28, and 8/3.
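A trajectory of this system can be generated, for example, by forward-Euler integration (the step size, trajectory length, and initial condition below are our arbitrary choices, not the paper's):

```python
import numpy as np

def lorenz(n=5000, dt=0.01, alpha=10.0, rho=28.0, beta=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz system."""
    traj = np.empty((n, 3))
    x, y, z = 1.0, 1.0, 1.0          # arbitrary initial condition
    for i in range(n):
        dx = alpha * (y - x)         # dx/dt
        dy = x * (rho - z) - y       # dy/dt
        dz = x * y - beta * z        # dz/dt
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[i] = (x, y, z)
    return traj

traj = lorenz()
```

Each row of `traj` is one 3-dimensional sample, which is the multidimensional signal the filters are asked to predict one step ahead.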

1) NLMS AND GA-NLMS ALGORITHM
In this experiment, the NLMS and GA-NLMS algorithms are used to track the 3-dimensional Lorenz attractor as a comparative experiment. The step sizes of the NLMS and GA-NLMS algorithms are chosen to be µ_NLMS = 2×10^−2 and µ_GA-NLMS = 10^−2. Fig. 1 shows the tracking performance of the NLMS and GA-NLMS algorithms for the different components of the 3-dimensional Lorenz attractor. Fig. 3 shows the signals of the 3-dimensional Lorenz attractor as estimated by the NLMS and GA-NLMS algorithms, respectively. From Fig. 1, we see that GA-NLMS tracks each component of the signal more quickly than the NLMS algorithm. In Fig. 3, we can see that the signal estimated by GA-NLMS is closer to the actual signal than that estimated by NLMS. The main reason is that, in the framework of geometric algebra, a multidimensional signal is represented as a multivector, so the GA-based algorithm processes the multidimensional signal in a holistic way and retains the relationships among its components; the signal estimated by GA-NLMS is therefore closer to the original signal. Thus, GA-NLMS works better than the NLMS algorithm.

2) NLMF AND GA-NLMF ALGORITHM
In this experiment, the NLMF and GA-NLMF algorithms are used to track the 3-dimensional Lorenz attractor as a comparative experiment. The step sizes of the NLMF and GA-NLMF algorithms are chosen to be µ_NLMF = 2×10^−2 and µ_GA-NLMF = 4×10^−3. Fig. 2 shows the tracking performance of the NLMF and GA-NLMF algorithms for each component of the 3-dimensional Lorenz attractor. Fig. 4 shows the signals of the 3-dimensional Lorenz attractor as estimated by the NLMF and GA-NLMF algorithms, respectively. From Fig. 4, the signal estimated by the GA-NLMF algorithm is closer to the actual signal; therefore, the GA-NLMF algorithm is more accurate than the NLMF algorithm. From Fig. 2, we can see that the GA-NLMF algorithm tracks each component of the 3-dimensional signal more quickly than the NLMF algorithm as the number of iterations increases.
It is clear that the GA-NLMF algorithm has better predictive performance than the NLMF algorithm. The main reason is that NLMF processes each component of a multidimensional signal separately, losing part of the multidimensional signal's information and making the representation incomplete. GA-NLMF is different: it represents a multidimensional signal as a multivector, processes it in a holistic way, and retains the relationships among its components; therefore, GA-NLMF works better than the NLMF algorithm.

3) GA-NLMS AND GA-LMS ALGORITHM
In this experiment, the GA-LMS and GA-NLMS algorithms are used to track the 3-dimensional Lorenz attractor. The step sizes of the GA-LMS and GA-NLMS algorithms are chosen to be µ_GA-LMS = 0.8×10^−5 and µ_GA-NLMS = 4×10^−2. Fig. 5 shows the tracking performance of the GA-LMS and GA-NLMS algorithms for each component of the Lorenz attractor. Fig. 8 shows the corresponding signals estimated by the GA-LMS and GA-NLMS algorithms.
From Fig. 5 and Fig. 8, we can see that GA-NLMS requires fewer iterations to track the actual signal than GA-LMS and tracks more accurately. From [5], we know that the main drawback of the LMS algorithm is its sensitivity to the input signal: the adaptive weight iteration is easily influenced by the input, which makes the estimate unstable. Although GA-LMS extends the LMS algorithm so that it can process multidimensional signals in a holistic way and preserves the relationships among their components, its stability cannot be guaranteed as the input signal grows. The GA-NLMS algorithm is a variant of GA-LMS that solves this problem by normalizing with the input signal.

4) GA-NLMF AND GA-LMK ALGORITHM
In this experiment, the GA-LMK [22] and GA-NLMF algorithms are used to track the 3-dimensional Lorenz attractor. The step sizes are chosen to be µ_GA-NLMF = 4×10^−3 and µ_GA-LMK = 10^−6. Fig. 6 shows the tracking performance of the GA-LMK and GA-NLMF algorithms for each component of the Lorenz attractor. Fig. 9 shows the corresponding signals estimated by the GA-LMK and GA-NLMF algorithms. From Fig. 6, we can see that the GA-NLMF algorithm tracks each component of the 3-dimensional signal more quickly than GA-LMK as the number of iterations increases. The main reason is that the GA-NLMF algorithm normalizes the input signal, which solves the input-sensitivity problem and makes the algorithm more stable. In Fig. 9, we can see that the signal estimated by the GA-NLMF algorithm is closer to the actual signal than that estimated by GA-LMK. Besides, the learning curves of the GA-NLMF and GA-LMK algorithms are depicted in Fig. 7: GA-NLMF has a faster convergence rate and a smaller mean absolute error than GA-LMK; that is to say, the performance of the GA-NLMF algorithm is better than that of the GA-LMK algorithm.

5) GA-NLMF AND GA-NLMS ALGORITHM
In this experiment, the GA-NLMF and GA-NLMS algorithms are used to track the 3-dimensional Lorenz attractor. The step sizes of the GA-NLMF and GA-NLMS algorithms are chosen to be µ_GA-NLMF = 4×10^−3 and µ_GA-NLMS = 4×10^−2. Fig. 10 shows the tracking performance of the GA-NLMF and GA-NLMS algorithms for each component of the Lorenz attractor. Fig. 11 shows the corresponding signals estimated by the GA-NLMF and GA-NLMS algorithms.
From Fig. 10 and Fig. 11, we can see that GA-NLMF requires fewer iterations to track the actual signal than GA-NLMS and tracks more accurately. From [42], [43], we know that NLMF converges faster than the NLMS algorithm; because NLMF uses higher-order statistics, its weight updates are more robust, so NLMF performs better than NLMS. Moreover, GA theory extends the NLMS and NLMF algorithms to GA space, allowing them to process multidimensional signals in a holistic way while retaining the relationships among the components of the multidimensional signal. Compared with GA-NLMS, each component of the signal estimated by the GA-NLMF algorithm is closer to the actual signal when the number of iterations is small. Therefore, GA-NLMF converges faster than the GA-NLMS algorithm and works better in tracking performance.
The learning curves of the GA-NLMF, GA-NLMS, and GA-LMS algorithms for the Lorenz attractor are also depicted in Fig. 7. We can see that GA-NLMF has a faster convergence rate and a smaller mean absolute error than the GA-NLMS algorithm when the number of iterations is small. For small step sizes, the convergence rate of GA-NLMF is significantly faster than that of GA-NLMS; that is to say, the GA-NLMF algorithm eases the trade-off between steady-state error and convergence rate in the adaptive filtering domain. In Fig. 7, we can also see that the mean absolute error of the GA-LMS algorithm fluctuates heavily and is unstable; the main reason is that it does not normalize the input signal, so the weight update is affected by it. In addition, the GA-NLMF algorithm's cost function is based on the fourth power of the normalized error signal, which contains higher-order statistics. Therefore, GA-NLMF also has a faster convergence rate and a smaller mean absolute error than the GA-LMS algorithm. Compared with the GA-LMS and GA-NLMS algorithms, the performance of the GA-LMK algorithm is improved because its cost function is based on the negated kurtosis of the error signal, which contains higher-order statistics. The experiments show that the performance of the GA-NLMF algorithm is better than that of the GA-NLMS, GA-LMK, and GA-LMS algorithms.

B. 3D MULTISCROLL ATTRACTOR
Multiscroll attractors are widely used in secure communications [44]; the attractor is defined by the differential equations given in [45]. We set the values of α, b, and c to 40, 3, and 28.
In this section, the GA-LMS, GA-NLMS, GA-LMK, and GA-NLMF algorithms are used to track the 3D multiscroll attractor. The step sizes of the GA-NLMF and GA-LMK algorithms are chosen to be µ_GA-NLMF = 4×10^−3 and µ_GA-LMK = 10^−6. Fig. 12 shows the tracking performance of the GA-LMK and GA-NLMF algorithms for each component of the multiscroll attractor.
Fig. 13 shows the corresponding signals estimated by the GA-LMK and GA-NLMF algorithms. The learning curves of the GA-LMS, GA-NLMS, GA-LMK, and GA-NLMF algorithms for the multiscroll attractor are depicted in Fig. 14.
From Fig. 12 and Fig. 13, we can see that GA-NLMF requires fewer iterations to track the actual signal than GA-LMK and tracks more accurately. In Fig. 14, we can see that GA-NLMF has a faster convergence rate and a smaller mean absolute error than the other algorithms. The significant performance gain of the GA-LMK algorithm over the GA-LMS and GA-NLMS algorithms comes from its cost function, which is based on the negated kurtosis of the error signal and includes higher-order statistics. On the other hand, the loss function of GA-NLMF also includes higher-order statistics, its computation is of the same order of magnitude, and the signal is normalized, which solves the input-sensitivity problem and makes the algorithm more stable. Although GA-LMK can process multidimensional signals in a holistic way, its stability cannot be guaranteed as the input signal grows, and its weight update is affected by the input signal. Therefore, the proposed GA-NLMF algorithm has a faster convergence rate and a smaller mean absolute error than GA-LMK; that is to say, GA-NLMF outperforms the GA-LMK algorithm.

C. COMPLEX SIGNAL
In this experiment, we choose a complex-valued signal to test the performance of the GA-LMK and GA-NLMF algorithms. The complex-valued signal is defined as y = e^(t + j20t). From [32], we know that the multivector structure of geometric algebra can be regarded as a generalization of complex numbers; thus, a complex-valued signal can be represented as a GA multivector. Fig. 15 shows the corresponding signals estimated by the GA-LMK and GA-NLMF algorithms. From Fig. 15, for a small number of iterations, the signal estimated by GA-NLMF is closer to the original signal than that estimated by GA-LMK, and GA-NLMF converges faster than GA-LMK. Thus, we can conclude that the tracking performance of GA-NLMF is better than that of the GA-LMK algorithm.

D. COMPUTATIONAL COMPLEXITY
In this section, we compare the computational cost of the algorithms proposed in this paper and the existing algorithms; the approximate elapsed times are measured in Matlab on a PC with an Intel(R) Core(TM) i5-4210U CPU @ 1.7 GHz. Table 1 shows the elapsed time of the GA-LMS, GA-NLMS, GA-LMK, and GA-NLMF algorithms. From Table 1, the elapsed time of GA-LMK and GA-NLMF is slightly longer than that of the GA-LMS algorithm; the main reason is that the loss functions of GA-LMK and GA-NLMF are more complex and contain high-order statistics. We can conclude that the GA-LMK and GA-NLMF algorithms improve the steady-state performance of the adaptive filter at some extra computational cost. Besides, the elapsed time of GA-NLMS is longer than that of the GA-LMS algorithm; the main reason is that GA-NLMS normalizes the input signal at each iteration. The elapsed times of GA-LMK and GA-NLMF are similar, because both contain high-order statistics and their computations are of the same order of magnitude. In addition, since the GA-NLMF algorithm normalizes the input signal, its stability is guaranteed; that is to say, the stability of GA-NLMF is better than that of GA-LMK. We can also see from Fig. 14 that GA-NLMF has a faster convergence rate and a smaller mean absolute error than GA-LMK. Thus, we can conclude that the performance of GA-NLMF is better than that of GA-LMK.

V. CONCLUSION
In this paper, two normalized adaptive filtering algorithms based on geometric algebra are proposed for multidimensional signal processing. GA-NLMS and GA-NLMF extend the original NLMS and NLMF algorithms from the real domain to GA space. In GA space, a multidimensional signal can be represented as a multivector, which preserves the relationships among the components of the multidimensional signal; the original normalized algorithms, in contrast, ignore these relationships and lose part of the information, resulting in incomplete signals. Therefore, the performance of GA-NLMS and GA-NLMF is better than that of the original NLMS and NLMF algorithms; the experiments also show that the GA-based algorithms are better than the original normalized adaptive filtering algorithms in steady-state error and convergence rate on the one-step-ahead prediction problem. The proposed GA-NLMS algorithm also converges faster than GA-LMS: compared with the LMS algorithm, NLMS ensures stability by normalizing with the input signal, and when the algorithm is extended to GA space, GA-NLMS performs better than the GA-LMS algorithm and yields a more stable estimated signal. Because NLMF is a higher-order-moment algorithm, it performs better than the NLMS algorithm; therefore, the proposed GA-NLMF algorithm has a faster convergence rate and a lower, more stable steady-state error than the GA-NLMS algorithm, and GA-NLMF is also better than the GA-LMS algorithm. Besides, the error signal of the GA-LMK algorithm also includes high-order statistics, which greatly improves its performance; however, unlike the GA-NLMF algorithm, it does not normalize the input signal, so its weight update is affected by the input signal and its stability is not as good as that of GA-NLMF. In the experiments, we also show that the GA-NLMF algorithm is better than the GA-LMK algorithm.
So far, we have extended the original NLMF and NLMS algorithms to GA space; however, many other algorithms have not yet been extended. In order to deal better with multidimensional signals, we hope in future work to extend more well-performing adaptive filtering algorithms to GA space and to combine these algorithms with practical applications. On the other hand, we will continue to study optimal adaptive step sizes and time-varying step-size normalized algorithms based on geometric algebra; it is expected that normalized algorithms based on geometric algebra will find more applications and become efficient tools.