Efficient Translation, Rotation, and Scale Invariants of Discrete Tchebichef Moments

Translation, rotation, and scale invariants of Tchebichef moments are commonly used descriptors in image analysis. Existing invariant algorithms are computed either indirectly from geometric moments or directly from Tchebichef moments. The former approach is relatively simple but inefficient, especially when the system consists only of Tchebichef moments. The latter approach avoids this inefficiency but is complicated, mainly because of the method used to formulate the invariant algorithm. Hence, in this paper, we introduce a new set of translation, rotation and scale Tchebichef moment invariants (TRSI) using moment normalization, which is considerably more efficient and accurate. This is achieved by formulating recurrence relations for the descriptors and resolving the uniqueness issues of principal axis normalization. Experimental studies show that the proposed method is computationally much faster and possesses higher discriminative power in classification than existing invariant algorithms. The main contributions of this paper are a novel fast computational algorithm that simplifies the translation, rotation and scale invariant algorithms of Tchebichef moments, and a novel normalization scheme that preserves the orthogonality the invariants inherit from the moment functions. The technique can also be deployed to derive affine invariants of Tchebichef moments, as well as invariants for other orthogonal moments such as Krawtchouk, Hahn and Racah moments.


I. INTRODUCTION
Moments are commonly used shape descriptors in the field of image analysis. Extensive applications can be found in object recognition [1], [2], character recognition [3], image watermarking [4], [5], video processing [6], medical imaging [7]-[9], fingerprint recognition [10], pesticide analysis [11], image denoising [12], etc. Hu [13] introduced geometric moments and established invariant properties related to the theory of algebraic invariants. However, geometric moments and their extensions in the form of radial and complex moments [14] are regular non-orthogonal moments. Features generated by these moments therefore suffer from information redundancy and sensitivity to noise [15], [16]. Because of that, continuous orthogonal moments such as Legendre moments, Zernike moments and Gaussian-Hermite moments have been proposed and studied [16]-[19]. The continuous orthogonal moments store image information with minimum redundancy and hence image reconstruction is easy. However, their basis functions are defined on specific continuous domains, so the computation of these moments requires coordinate transformations and integral approximations. This leads to further computational complexity and discretization errors.

(The associate editor coordinating the review of this manuscript and approving it for publication was Turgay Celik.)
Hence, discrete orthogonal moment functions based on discrete Tchebichef polynomials were introduced by Mukundan et al. [20]. This was followed by the introduction of Krawtchouk moments [21], Hahn moments [22] and several other family members of discrete orthogonal moments [5], [23]-[25]. Due to the orthogonality of the kernel functions, the information represented by discrete orthogonal moments is much more compact. In addition, the kernel functions are defined in the discrete domain, which allows the moments to be computed accurately without numerical approximation. These properties make discrete orthogonal moments ideal feature extractors for patterns represented in digital form.
An important property of moment functions is that they should be invariant to translation, rotation and scale, which can lead to better performance in image analysis [10], [26]-[29]. The translation, rotation and scale invariants of discrete Tchebichef moments are formulated either directly from the moment functions or indirectly from other moments such as geometric moments. The first translation, rotation and scale invariants for discrete Tchebichef moments using the indirect approach were proposed by Batioua et al. [30] for 3D patterns. The method is based on the invariants proposed by Yap et al. [21] for Krawtchouk moments. The indirect approach is simple to formulate and has thus become the most commonly used form of translation, rotation and scale invariants for discrete orthogonal moments on 2D and 3D patterns to date [21], [24], [25], [29]-[32]. However, the computation of the transformed central-geometric moments is relatively time consuming because of the complicated quantities and repetitive calculations that appear in the expressions.
The rotation invariants of Tchebichef moments using the direct approach were first proposed by Zhang et al. [33] and Goh et al. [34]. The two algorithms eliminate spatial displacement, scale and rotational deformations using central-Tchebichef moments and moment normalization, respectively. Zhang et al. [33] removed rotational deformation using second-order central-Tchebichef moments, while Goh et al. [34] eliminated these deformations by exploiting the geometric distortion properties. However, as both algorithms involve the decomposition of hypergeometric functions and sequential computations, the resulting invariant algorithms are complicated and computationally intensive.
Recently, based on the orthogonality properties of Tchebichef polynomials, Pee et al. [35] successfully simplified the anisotropic scale and translation invariant (ASTI) algorithm for Tchebichef moments initially proposed by Zhu et al. [36]. The features from the ASTI method are invariant to translation and scaling; however, the algorithm does not work when images are rotated. Hence, in this paper we formulate a new set of translation, rotation and scale invariants for Tchebichef moments (TRSI) to address this issue. The invariant algorithm is not only more computationally efficient, but the generated features also preserve the orthogonality of the Tchebichef moments. As such, the features are more resilient to noise and possess better discriminative power for classification. In the empirical study, the proposed algorithm is benchmarked against the 2D translation, rotation and scale Tchebichef moment invariants using the indirect approach (TRSI-ID) by Batioua et al. [30] and the invariant algorithm based on the direct approach (TRSI-D) by Zhang et al. [33].
In Section II, a brief overview of discrete Tchebichef moments is given, followed by a review of existing translation, rotation and scale invariant algorithms for Tchebichef moments using the indirect and direct approaches. In Section III, the new TRSI algorithm is proposed. Section IV gives an empirical verification to support the theoretical framework. Section V concludes the paper.

II. REVIEW ON TCHEBICHEF MOMENTS AND THE INVARIANT ALGORITHMS

A. TCHEBICHEF MOMENTS
The n-th order discrete Tchebichef polynomial t_n^N(x) is defined as

t_n^N(x) = (1 − N)_n 3F2(−n, −x, 1 + n; 1, 1 − N; 1), n, x = 0, 1, ..., N − 1,

where 3F2 denotes the generalized hypergeometric function and the Pochhammer symbol (a)_k = a(a + 1)(a + 2)···(a + k − 1). Tchebichef polynomials satisfy the orthogonality condition

Σ_{x=0}^{N−1} t_n^N(x) t_m^N(x) = ρ(n, N) δ_{n,m},

where the square-norm is

ρ(n, N) = (2n)! C(N + n, 2n + 1), n = 0, 1, ..., N − 1. (4)

However, as mentioned in [20], the polynomial t_n^N(x) grows at a rate of N^n, which causes numerical instability in the computation. The orthonormalized Tchebichef polynomial has thus been introduced to address the issue:

t~_n^N(x) = t_n^N(x) / sqrt(ρ(n, N)).

The (n + m)-th order Tchebichef moment T_{n,m} of an image f(x, y) of size N × N is defined as

T_{n,m} = Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} t~_n^N(x) t~_m^N(y) f(x, y).

The three-term recurrence relation of the Tchebichef polynomial t~_n^N(x) can be expressed as

t~_n^N(x) = (α1 x + α2) t~_{n−1}^N(x) + α3 t~_{n−2}^N(x), (7)

with

α1 = (2/n) sqrt((4n² − 1)/(N² − n²)),
α2 = ((1 − N)/n) sqrt((4n² − 1)/(N² − n²)),
α3 = ((1 − n)/n) sqrt((2n + 1)/(2n − 3)) sqrt((N² − (n − 1)²)/(N² − n²)),

and initial conditions

t~_0^N(x) = 1/sqrt(N), t~_1^N(x) = (2x + 1 − N) sqrt(3/(N(N² − 1))). (8)

The recurrence formula (7) with initial conditions (8) and the symmetry property

t~_n^N(N − 1 − x) = (−1)^n t~_n^N(x) (12)

of the Tchebichef polynomial are commonly used to compute the basis of Tchebichef moments. However, cumulative errors in the computation cause the algorithm to become unstable when the order is large, and several techniques have been proposed to address the issue [37]-[39].

B. TRANSLATION, ROTATION AND SCALE TCHEBICHEF MOMENT INVARIANTS USING INDIRECT APPROACH

In the indirect approach, the invariant descriptors are built from the transformed central-geometric moments ṁ_{p,q}, where

m_{p,q} = Σ_x Σ_y x^p y^q f(x, y)

and the central-geometric moments μ_{p,q} in (18) are, respectively, the geometric moments and the geometric moments taken about the centroid (x̄, ȳ) = (m_{1,0}/m_{0,0}, m_{0,1}/m_{0,0}). The angle θ_h in (16) is further normalized so that μ_{2,0} ≥ μ_{0,2} and μ_{3,0} ≥ 0 to resolve the ambiguities caused by principal axis normalization.
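As a concrete illustration, the orthonormal polynomials and the moments T_{n,m} can be computed directly from the recurrence (7) and initial conditions (8). The sketch below uses our own helper names and also checks the orthonormality of the basis numerically:

```python
import numpy as np

def tchebichef_basis(order, N):
    """Orthonormal discrete Tchebichef polynomials t~_n(x), n = 0..order-1,
    computed with the three-term recurrence and its initial conditions."""
    x = np.arange(N)
    t = np.zeros((order, N))
    t[0] = 1.0 / np.sqrt(N)
    if order > 1:
        t[1] = (2 * x + 1 - N) * np.sqrt(3.0 / (N * (N**2 - 1)))
    for n in range(2, order):
        a1 = (2.0 / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a2 = ((1 - N) / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a3 = ((1 - n) / n) * np.sqrt((2 * n + 1) / (2 * n - 3)) \
             * np.sqrt((N**2 - (n - 1)**2) / (N**2 - n**2))
        t[n] = (a1 * x + a2) * t[n - 1] + a3 * t[n - 2]
    return t

def tchebichef_moments(f, order):
    """T_{n,m} = sum_x sum_y t~_n(x) t~_m(y) f(x,y) for an N x N image f."""
    t = tchebichef_basis(order, f.shape[0])
    return t @ f @ t.T
```

For a constant image the only nonzero moment is T_{0,0}, which reflects the orthogonality of the higher-order polynomials to t~_0.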

C. TRANSLATION, ROTATION AND SCALE TCHEBICHEF MOMENT INVARIANTS USING DIRECT APPROACH
The existing translation, rotation and scale invariants for Tchebichef moments using the direct approach (TRSI-D) are mainly based on the algorithm proposed in [33]. To improve the computational efficiency, we use the algorithm proposed in [36] to compute the central-Tchebichef moments, where the falling factorial is defined as a^(n) = a × (a − 1) × ··· × (a − n + 1). We next consider scale and rotation normalization. Although the existing algorithm does not include a scale invariant, it can easily be extended by incorporating scale normalization so that the generated features are invariant to translation, rotation and scale.
The (n + m)-th order translation, rotation and scale invariant function, I^trs-D_{n,m}, then has the expression given below, where ω_z denotes the scale normalization parameter for which the normalized image has the mass value z.
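Both reviewed approaches ultimately rely on the principal-axis angle obtained from second-order central-geometric moments. A minimal sketch of that angle computation follows (helper names are ours; the further normalization requiring μ_{2,0} ≥ μ_{0,2} and μ_{3,0} ≥ 0 is omitted here):

```python
import numpy as np

def principal_axis_angle(f):
    """theta_h = 0.5 * atan2(2*mu_{1,1}, mu_{2,0} - mu_{0,2}), the standard
    principal-axis angle from central geometric moments.  The reviewed
    algorithms further adjust theta_h by pi/2 or pi to resolve ambiguity."""
    x, y = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m00 = f.sum()
    xb, yb = (x * f).sum() / m00, (y * f).sum() / m00
    mu = lambda p, q: ((x - xb) ** p * (y - yb) ** q * f).sum()
    return 0.5 * np.arctan2(2 * mu(1, 1), mu(2, 0) - mu(0, 2))

# an elongated synthetic pattern oriented at 30 degrees: the recovered
# angle should agree with the construction (modulo the pi/2 ambiguity)
f = np.zeros((64, 64))
phi = np.pi / 6
for t in range(-20, 21):
    f[int(np.rint(32 + t * np.cos(phi))), int(np.rint(32 + t * np.sin(phi)))] = 1.0
theta = principal_axis_angle(f)
```

The small residual error in the recovered angle comes only from rounding the line to the pixel grid.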

III. PROPOSED TRANSLATION, ROTATION, AND SCALE TCHEBICHEF MOMENT INVARIANTS
In this section we will present the new translation, rotation and scale Tchebichef moment invariants (TRSI). The proposed TRSI is based on the technique of image normalization. In the following sub-sections, we will formulate recurrence relations for fast computation of the invariant algorithms. This is followed by the normalization scheme of TRSI.

A. FAST COMPUTATION USING RECURRENCE RELATIONS
Suppose the deformed pattern f(x, y) and the normalized pattern g(x′, y′) lie in the image spaces N_0 × N_0 and N_s × N_s respectively, such that g(x′, y′) = f(x, y), where (x′, y′) is the image of (x, y) under the normalizing affine transformation.

Definition 1: Affine transformed Tchebichef moments
The recurrence relations of affine transformed Tchebichef moments are given by the following theorem.
Theorem 1: The (n + m)-th order affine transformed Tchebichef moment T^am_{n,m} is given by the expansion stated above, where the coefficient λ_{n,m,k,j} satisfies two different recurrence relation expressions with respect to the row and column directions.

Proof: Consider the basis of T^am_{0,0}(g) using (8). After some simplification using (8), the zeroth-order term is obtained. Similarly, for t~^{N_s}_0(x′) t~^{N_s}_1(y′), the first-order term follows. From the three-term recurrence relation of the Tchebichef polynomial (7), and then by induction together with (42) and (7), after some simplification we obtain the recurrence in the row direction. Applying the same procedure in the column direction gives (31). The theorem is proved.
In the next subsection we investigate the characteristics of certain Tchebichef moments when the pattern is at the center of an N × N image. These will later be used to derive the normalization parameters for the TRSI.

B. TCHEBICHEF MOMENTS FOR PATTERNS AT THE CENTER OF IMAGES
A pattern is at the center of an N × N image if its centroid coincides with the center of the image, i.e. x̄ = ȳ = (N − 1)/2.

It can then easily be shown that the centroid of a pattern, expressed in terms of Tchebichef moments, is

x̄ = (N − 1)/2 + (1/2) sqrt((N² − 1)/3) T_{1,0}/T_{0,0}, ȳ = (N − 1)/2 + (1/2) sqrt((N² − 1)/3) T_{0,1}/T_{0,0}.

This gives us the following lemma.

Lemma 2: Let f denote a pattern in an N × N image. The first-order Tchebichef moments satisfy T_{1,0}(f) = T_{0,1}(f) = 0 if and only if x̄ = ȳ = (N − 1)/2.

We next investigate the Tchebichef moment values of a pattern at the center of an N × N image when it is rotated by angles ±π/2 and π about its centroid.
Lemma 3: Suppose f is a pattern at the center of an N × N image.

1) If g denotes the pattern f rotated by ±π/2 about the pattern centroid, then

T_{2,0}(g) = T_{0,2}(f), T_{0,2}(g) = T_{2,0}(f). (50)

2) If g denotes the pattern f rotated by π about the pattern centroid, then

T_{3,0}(g) = −T_{3,0}(f). (51)

Proof: Suppose g is the rotation of f by π/2 about its centroid. Since f is at the center of the image, using the symmetry property (12) we obtain T_{2,0}(g) = T_{0,2}(f), and similarly T_{0,2}(g) = T_{2,0}(f). Following the same steps with g the rotation of f by −π/2 about its centroid leads to the same conclusion, which gives (50).
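Lemmas 2 and 3 can be verified numerically on a small pattern that has its centroid exactly at the image center but no rotational symmetry. The sketch below (helper names are ours) builds such a pattern and checks the stated moment identities, using the fact that rotating an N × N array by ±π/2 or π maps the pixel grid onto itself:

```python
import numpy as np

def tchebichef_basis(order, N):
    # orthonormal discrete Tchebichef polynomials via the three-term recurrence
    x = np.arange(N)
    t = np.zeros((order, N))
    t[0] = 1.0 / np.sqrt(N)
    if order > 1:
        t[1] = (2 * x + 1 - N) * np.sqrt(3.0 / (N * (N**2 - 1)))
    for n in range(2, order):
        a1 = (2.0 / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a2 = ((1 - N) / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a3 = ((1 - n) / n) * np.sqrt((2 * n + 1) / (2 * n - 3)) \
             * np.sqrt((N**2 - (n - 1)**2) / (N**2 - n**2))
        t[n] = (a1 * x + a2) * t[n - 1] + a3 * t[n - 2]
    return t

def T(f, n, m):
    """Single Tchebichef moment T_{n,m} of an N x N image f."""
    t = tchebichef_basis(max(n, m) + 1, f.shape[0])
    return t[n] @ f @ t[m]

# four pixels whose x- and y-coordinates both average to (N-1)/2 = 3.5,
# so the centroid is at the image centre, yet the pattern is asymmetric
N = 8
f = np.zeros((N, N))
for (i, j) in [(1, 0), (2, 3), (4, 5), (7, 6)]:
    f[i, j] = 1.0

g90 = np.rot90(f)      # rotation by pi/2 about the image centre
g180 = np.rot90(f, 2)  # rotation by pi about the image centre
```

Lemma 2 then predicts T_{1,0}(f) = T_{0,1}(f) = 0, and Lemma 3 predicts the swap of T_{2,0} and T_{0,2} under the π/2 rotation and the sign change of T_{3,0} under the π rotation.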
Similarly, if g is the rotation of f about its centroid by π, the same argument with the symmetry property (12) yields (51). This completes the proof.

C. NORMALIZATION SCHEME OF TRSI

Applying the normalization conditions, we get (58). Lemma 2 gives the necessary and sufficient condition that the first-order moments are equal to zero exactly when the pattern centroid is at the center of the N_s × N_s image. Thus, the second normalization condition in (61) resolves the spatial displacement of deformed images.
We next find the angle θ. From Theorem 1 and the definition of I^trs_{n,m} given in (57), and using the third normalization condition stated in (62), we obtain (59). However, the solutions in (59) are not unique: the normalized angle θ has multiple solutions, each separated by (π/2) × n, n ∈ Z. Hence the additional constraints (63) and (64), derived using Lemma 3, are used to resolve the ambiguities. An angle of π/2 is added to θ if constraint (63) is not satisfied; similarly, an angle of π is added to θ if the invariant functions fail to satisfy constraint (64). The theorem is proved.
There is a drawback in the proposed normalization scheme. Because the Tchebichef moment basis t~^{N_s}_n(x) t~^{N_s}_m(y) is symmetric about the center of the N_s × N_s image, a large number of odd-order invariant descriptors generated from symmetric patterns will have values equal or close to zero, since patterns have been shifted to the center by constraint (61). As such, classification tasks become difficult, especially for symmetric patterns. Hence, skewed parameters are added to the TRSI model to address this issue.

D. SKEWED PARAMETERS FOR TRANSLATION, ROTATION AND SCALED TCHEBICHEF MOMENT INVARIANTS
Skewed parameters for TRSI are introduced to enhance the discriminative power of the moment invariants.
where a_1 = ω cos θ, a_2 = ω sin θ, and the skewed parameters (t_x, t_y) denote the spatial displacement, in pixels, from the center of the normalized image N_s × N_s.
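To illustrate how these parameters act, the sketch below applies the inverse of the rotation and scale map built from a_1 = ω cos θ and a_2 = ω sin θ, with a skew displacement (t_x, t_y), by nearest-neighbour resampling. The function and its details are our own construction, not the paper's exact normalization scheme:

```python
import numpy as np

def normalize_pattern(f, omega, theta, tx=0.0, ty=0.0, Ns=None):
    """Resample f onto an Ns x Ns grid so the output is f rotated by theta,
    scaled by omega, and displaced (tx, ty) pixels from the image centre.
    Uses the inverse of M = [[a1, -a2], [a2, a1]] with nearest-neighbour
    lookup; det(M) = omega**2."""
    Ns = Ns or f.shape[0]
    a1, a2 = omega * np.cos(theta), omega * np.sin(theta)
    cx, cy = (f.shape[0] - 1) / 2.0, (f.shape[1] - 1) / 2.0
    cgx, cgy = (Ns - 1) / 2.0 + tx, (Ns - 1) / 2.0 + ty
    xo, yo = np.mgrid[0:Ns, 0:Ns]
    u, v = xo - cgx, yo - cgy
    det = a1 * a1 + a2 * a2
    # inverse map: source coordinates in the deformed image
    xs = (a1 * u + a2 * v) / det + cx
    ys = (-a2 * u + a1 * v) / det + cy
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    ok = (xi >= 0) & (xi < f.shape[0]) & (yi >= 0) & (yi < f.shape[1])
    g = np.zeros((Ns, Ns))
    g[ok] = f[xi[ok], yi[ok]]
    return g
```

With ω = 1, θ = 0 and zero skew the map is the identity, and with θ = π/2 it reproduces a quarter-turn of the image grid, which makes the convention easy to check.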

IV. EXPERIMENTAL RESULTS
In this section, experimental results for the translation, rotation and scale invariant algorithms are provided. The proposed TRSI is benchmarked against TRSI-D by Zhang et al. and Zhu et al. [33], [36], reviewed in Sub-section II-C, and TRSI-ID, reviewed in Sub-section II-B. The first experiment evaluates the accuracy of the translation, rotation and scale invariants of the proposed model. The second experiment evaluates the computational performance of the proposed invariant algorithm. Finally, the classification performance of the proposed algorithm is evaluated in non-noisy and noisy environments. The accuracy of the algorithms is measured using the relative standard deviation (RSD), the percentage spread

RSD = (σ/μ) × 100%,

where μ and σ denote the mean and standard deviation of the feature values, respectively.
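For concreteness, the RSD measure can be computed as follows (the helper name is ours):

```python
import numpy as np

def rsd_percent(values):
    """Relative standard deviation: RSD = (sigma / mu) * 100, the percentage
    spread of a set of feature values across deformations."""
    v = np.asarray(values, dtype=float)
    return np.std(v) / np.mean(v) * 100.0
```

An RSD of zero means a feature is perfectly invariant over the deformed images; small values indicate small relative spread.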

A. EXPERIMENT ON ACCURACY OF TRANSLATION, ROTATION, AND SCALE INVARIANTS
In this subsection, we evaluate the accuracy of the translation, rotation and scale invariant algorithms. A set of images of different complexity, represented by groups of numbers, letters, symbols, Chinese characters and gray-scale images, is used in this experiment; Figure 1 shows the test images. Table 1 records the RSD, in percent, of the invariant features up to order 7, each calculated from 1,296 deformed images.
The RSDs of TRSI shown in Table 1 are mostly below 1%. This is due not only to the accuracy of the proposed normalization scheme; the skewed parameters also play an important role in the accuracy of the invariant features. With the spatial displacement introduced by the skewed parameters, most invariants, including the odd-order features, deviate from zero for both symmetric and non-symmetric images. This not only improves the computational accuracy but also enhances the discriminative power of the features: since most of the features are non-zero, they are able to extract and represent more information from the given patterns.

B. NUMERICAL COMPUTATION EFFICIENCY
In this subsection, we evaluate the numerical performance of the proposed algorithm. The algorithms were implemented in Matlab 2014b on an Intel i7-7700HQ processor at 2.8 GHz with 32 GB of RAM. A gray-scale pattern ''cameraman'' and a binary pattern ''deer'' [40], shown in Figure 2, were used; a total of 144 deformed images were generated from the two patterns. For the direct approach, Tchebichef moments are first generated from the deformed images, followed by the computation of the invariant descriptors from the Tchebichef moments, up to orders 5, 10, 20 and 30. The deformed and normalized images are identical in size, N_0 × N_0 = N_s × N_s. The mass of the normalized images for TRSI and TRSI-D is z = N_0 × N_0 × 0.18, with skewed parameters (t_x, t_y) = (0, 0). For the indirect approach, the invariant descriptors of TRSI-ID are computed directly from the transformed central-geometric moments using (13)-(16). The average CPU elapsed times in milliseconds were recorded. Table 2 compares the CPU elapsed times of TRSI, TRSI-D and TRSI-ID. Columns 3, 4 and 5 of Table 2 give the total time required to generate the invariant descriptors from an image using TRSI, TRSI-ID and TRSI-D, respectively. As shown in columns 3 and 4 of Table 2, TRSI reduces the computation time of TRSI-ID by 74.5% to 87.3% when generating features from 100 × 100 images with orders up to 5, 10, 20 and 30. When the image size is increased to 300 × 300, the computation time is reduced relative to TRSI-ID by 88.0% to 90.0% for the same orders. In general, TRSI is more numerically efficient than TRSI-ID, and performs even better as the image size grows. On the other hand, as shown in columns 3 and 5 of Table 2, TRSI reduces the computation time of TRSI-D by 0.0% to 59.1% when generating features with orders up to 5 and 10.
In addition, TRSI reduces the computation time of TRSI-D by 85.4% to 99.4% when generating invariant features with orders up to 20 and 30. Hence, the performance of TRSI and TRSI-D is nearly identical when generating features up to order 5, but TRSI is much more efficient than TRSI-D when generating higher-order invariant descriptors. These results show that TRSI is superior in computational performance to the most commonly used translation, rotation and scale invariant algorithms for Tchebichef moments to date.
We next examine the discriminative power and noise sensitivity of the TRSI features.

C. EXPERIMENT ON OBJECT RECOGNITION
In this experiment, the set of Chinese characters listed in Figure 3 is used as the training set. Each training pattern is of size 60 × 60 in the image space N_0 × N_0 = 120 × 120. The testing set is generated from the training set by transforming the images with scale factors 0.8, 1.0 and 1.2, rotating them by angles of 0°, 20°, ..., 340°, and displacing them vertically and horizontally by −2, 0 and 2 pixels. Excluding images identical to the training images, the testing set consists of 9,700 deformed images. Salt-and-pepper noise of different densities is then added around the deformed patterns; the noise area is about 1.25 times the size of the deformed patterns. Figure 4 shows some of the testing images contaminated by 10% salt-and-pepper noise.
The following sets of features are used for the recognition task, where I_{n,m} is the invariant defined in the previous sections and V(3), V(5) and V(7) denote the selected invariant features up to orders 3, 5 and 7, respectively. In this experiment, the Euclidean distance is utilized as the classification measure,

d(V~_s, V~_t^(k)) = sqrt( Σ_{j=1}^{T} (v~_{s,j} − v~_{t,j}^(k))² ),

where V~_s is the T-dimensional scaled feature vector of the test sample and V~_t^(k) is the scaled training vector of class k. The scaled feature vectors are used mainly to remove the large dynamic range of the feature values. The recognition accuracy η is defined as the percentage of test samples that are correctly classified. The classification performances of the proposed TRSI, TRSI-D and TRSI-ID are recorded in Table 3. As shown in Table 3, TRSI-D(n) has the lowest classification performance. This is mainly because the principal axis normalization ambiguities cause larger intra-class variability among the generated features and therefore compromise the accuracy of the recognition system. To correct this we propose TRSI(nB). Like TRSI-D(n), TRSI(nB) transforms the deformed patterns to the image origin; in addition, the principal axis normalization ambiguities are resolved by Theorem 4. Hence, as shown in Table 3, TRSI(nB) achieves a much better recognition rate than TRSI-D under low-noise or noise-free conditions. However, its classification performance degrades much faster as the noise level increases. This can be explained by the skew effect in the normalization process: according to our study, when an image is skewed further away from the center of the image, the inter-class variability of the Tchebichef moment invariant descriptors is reduced significantly. The features then become very sensitive to noise, because a small deviation caused by noise erodes the inter-class variability and causes misclassification. This is especially noticeable in the higher-order features.
As a result, there is little improvement, or the classification performance even worsens, when higher-order descriptors are added to the feature sets of TRSI(nB) or TRSI-D(n).
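The minimum-distance classification step described above can be sketched as follows. The scaling rule used here, dividing each feature dimension by its maximum absolute value over the training set, is one common choice and may differ from the paper's exact scaling formula; function names are ours:

```python
import numpy as np

def classify(train_vecs, train_labels, test_vecs):
    """Minimum-Euclidean-distance classifier on scaled feature vectors.
    Scaling removes the large dynamic range of the feature values."""
    scale = np.abs(train_vecs).max(axis=0)
    scale[scale == 0] = 1.0          # guard against all-zero features
    tr = train_vecs / scale
    te = test_vecs / scale
    # pairwise Euclidean distances: test samples x training samples
    d = np.sqrt(((te[:, None, :] - tr[None, :, :]) ** 2).sum(axis=2))
    return train_labels[np.argmin(d, axis=1)]

def recognition_accuracy(pred, truth):
    """eta = (correctly classified samples) / (total samples) * 100%."""
    return 100.0 * np.mean(pred == truth)
```

Each test vector is assigned the label of its nearest training vector, and the recognition accuracy is the percentage of correct assignments.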

Since TRSI-ID(3) maps patterns to the center of the image space, its performance is comparable to, or even better than, TRSI(3A). However, for higher orders of TRSI-ID(n), n = 5 and 7, the performance degrades significantly. This can be explained by the fact that transformed central-geometric moments, like geometric moments, become increasingly sensitive to noise as the moment order increases. Thus TRSI-ID(5) and TRSI-ID(7) suffer more from noise than TRSI-ID(3), which compromises the classification performance.
The normalized patterns of TRSI(nA) are at the center of the images. The inter-class variability of this feature set is wider, and it therefore has better discriminative power. Furthermore, the higher-order features generated by TRSI(nA) are relatively less sensitive to noise than those of TRSI-ID(n). This is mainly because the normalized images lie within the canonical space of the basis functions and the invariant descriptors are calculated directly from Tchebichef moments using the direct approach. The descriptors therefore inherit important characteristics of discrete orthogonal moments, such as compact information representation and better noise resistance. Hence, a significant improvement in classification performance is achieved by TRSI(7A) for both non-noisy and noisy images when invariant features up to order 7 are used. The algorithm also allows us to fine-tune the skewed parameters so that better accuracy can be achieved for a particular set of deformed images; a slight deviation from the image center further enhances the discriminative power of the features and the performance of the classification system.

V. CONCLUSION
In this paper, a new translation, rotation and scale invariant algorithm (TRSI) has been proposed for Tchebichef moments. The algorithm consists of a set of recurrence relations for the fast computation of affine transformed Tchebichef moments and a new normalization scheme for the invariants. The proposed algorithm obtains the Tchebichef moment invariants more efficiently than existing methods, and its computational advantage grows as the image size and moment order increase. The features of the proposed TRSI are found to possess higher discriminative power and better classification performance in non-noisy and noisy conditions than current invariant descriptors. The derivation can be extended to formulate affine invariants of Tchebichef moments or invariants for other orthogonal moment functions. The disadvantage of the proposed method is that, since the fast computation is based on recurrence formulas, the computation of higher-order invariants can accumulate errors, with a subsequent loss in accuracy.