New Separable Moments Based on Tchebichef-Krawtchouk Polynomials

Orthogonal moments are beneficial tools for analyzing and representing images and objects. Different hybrid forms, comprising first and second levels of combination, have been created from the Tchebichef and Krawtchouk polynomials. In this study, all the hybrid forms, including the first and second levels of combination that satisfy the localization and energy compaction (EC) properties, are investigated. A new hybrid polynomial termed the squared Tchebichef-Krawtchouk polynomial (STKP) is also proposed. The mathematical and theoretical expressions of the STKP are introduced, and its performance is evaluated and compared with that of the other hybrid forms. Results show that the STKP outperforms the existing hybrid polynomials in terms of the EC and localization properties. Image reconstruction analysis is performed to demonstrate the ability of the STKP on actual images; a comparative evaluation against the Charlier and Meixner polynomials is also carried out in terms of the normalized mean square error. Moreover, an object recognition task is performed to verify the promising abilities of the STKP as a feature extraction tool. The correct recognition percentage shows the robustness of the proposed polynomial in object recognition by providing a reliable feature vector for the classification process.


I. INTRODUCTION
Shape descriptors and features are considered substantial tools in computer vision applications, such as pattern recognition [1], face recognition [2], shot boundary detection [3], [4], and information hiding [5]. Moments, which are classified into geometric, continuous, and discrete types, have been utilized as shape descriptors over the past two decades [6]-[8]. Geometric moments are nonorthogonal and cannot reconstruct signals; thus, they cause information redundancy. By contrast, continuous and discrete moments are orthogonal and can reconstruct 1D and 2D signals; therefore, they solve the problem of information redundancy. Discrete orthogonal moments (DOMs) are more favorable than continuous moments because DOMs alleviate computational complexity and discretization error [9]. DOMs are defined as the projection of a signal onto orthogonal polynomial functions [8]. The Tchebichef polynomial (TP), the Krawtchouk polynomial (KP), and the Hahn polynomial are examples of discrete orthogonal polynomials [10]. The TP shows remarkable energy compaction (EC) [7], whereas the KP outperforms the TP in the ability to extract local features from images [11].
(The associate editor coordinating the review of this manuscript and approving it for publication was Essam A. Rashed.)
On the basis of the idea that the orthogonal polynomial can be generated by multiplying two orthogonal polynomials, different hybrid forms of KP and TP have been proposed. Jassim et al. [8] proposed the Tchebichef-Krawtchouk polynomial (TKP), Mahmmod et al. [12] presented the Krawtchouk-Tchebichef polynomial (KTP), and Abdulhussain et al. [13] recently proposed the squared Krawtchouk-Tchebichef polynomial (SKTP).
The TKP and its discrete transform coefficients (moments) have a remarkable localization property, but a special type of window for signal framing is required for signal processing [12]. The KTP and its moments perform well in signal compression [12]; however, the KTP requires improvement in its EC property. The SKTP was proposed to improve the results in terms of both the localization and EC properties. The SKTP is considered a second level of combination of orthogonal polynomials (OPs): it is obtained by multiplying two OPs, each resulting from the hybrid OPs (TKP and KTP) [13].
(VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see http://creativecommons.org/licenses/by/4.0/)
In this study, a mathematical analysis of the different combinations of hybrid OPs is performed. These combinations are discussed in terms of the localization and EC properties. On the basis of the results obtained from the analysis, a new separable OP is proposed.
This paper is organized as follows: Section 2 presents the mathematical fundamentals of TP, KP, and moment computations. Section 3 introduces the proposed OP. Section 4 provides the performance evaluation of the proposed polynomial. In Section 5, an object recognition system is used to evaluate the performance of the proposed polynomial. Lastly, Section 6 concludes the study.

II. PRELIMINARIES
In this section, the mathematical definitions of TP and KP are provided. Then, the moment computation of 2D signals (images) is presented.

A. TCHEBICHEF ORTHOGONAL POLYNOMIAL
The nth order of the scaled TP, $T_n(x)$, for a signal of size $N$ (a positive integer) is given by [7], [13], [14]:

$$T_n(x) = \sqrt{\frac{\omega_T(x)}{\rho_T(n)}}\,(1-N)_n\;{}_3F_2(-n, -x, 1+n;\ 1, 1-N;\ 1), \quad n, x = 0, 1, \ldots, N-1$$

where $n$ and $x$ are the polynomial order and the signal index, respectively; and $\omega_T(x)$ and $\rho_T(n)$ are the weight function and the squared norm of the TP, respectively, defined as:

$$\omega_T(x) = 1, \qquad \rho_T(n) = (2n)!\binom{N+n}{2n+1}$$

Here $\binom{a}{b}$ is the binomial coefficient, defined as $\frac{a!}{b!(a-b)!}$, and ${}_3F_2(\cdot)$ is the generalized hypergeometric function:

$${}_3F_2(a_1, a_2, a_3;\ b_1, b_2;\ z) = \sum_{k=0}^{\infty} \frac{(a_1)_k (a_2)_k (a_3)_k}{(b_1)_k (b_2)_k} \frac{z^k}{k!}$$

where $(a)_k$ is the Pochhammer symbol (rising factorial) [6]. The Tchebichef polynomial coefficients (TPCs) can be organized in a 2D array indexed by $n$ and $x$; thus, a three-term recurrence (TTR) algorithm can be used to compute the TPCs. The TTR algorithm replaces the hypergeometric and gamma functions because it reduces numerical instability and computation time [6], [14]. Different TTR algorithms have been proposed to compute the TPCs. In this study, the state-of-the-art TTR algorithm proposed in [7] is used; its recurrence coefficients $a_1$, $a_2$, $b_1$, $b_2$ and stability bound $L_X$ are given therein. This TTR can deal with large signal sizes at a low computational cost [7].
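As a concrete check of these definitions, the normalized TP matrix can be evaluated directly from the terminating hypergeometric series. This is an illustrative sketch, practical only for small N; it is not the TTR algorithm of [7]:

```python
import math

def poch(a, k):
    """Rising factorial (Pochhammer symbol) (a)_k."""
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def tchebichef_matrix(N):
    """N x N matrix of normalized Tchebichef coefficients T[n][x].

    Direct evaluation of t_n(x) = (1-N)_n * 3F2(-n, -x, 1+n; 1, 1-N; 1),
    scaled by sqrt(rho(n)) with rho(n) = (2n)! * C(N+n, 2n+1); the weight is 1.
    """
    T = [[0.0] * N for _ in range(N)]
    for n in range(N):
        rho = math.factorial(2 * n) * math.comb(N + n, 2 * n + 1)
        for x in range(N):
            s = 0.0
            for k in range(n + 1):  # (-n)_k kills all terms with k > n
                s += (poch(-n, k) * poch(-x, k) * poch(1 + n, k)
                      / (poch(1, k) * poch(1 - N, k) * math.factorial(k)))
            T[n][x] = poch(1 - N, n) * s / math.sqrt(rho)
    return T

# Orthonormality check: sum_x T_n(x) T_m(x) = delta_nm
T = tchebichef_matrix(8)
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
```

For signal sizes beyond a few hundred samples the factorials overflow the floating-point range, which is exactly why the paper relies on TTR algorithms instead of direct evaluation.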

B. KRAWTCHOUK ORTHOGONAL POLYNOMIAL
The nth order of the scaled Krawtchouk polynomial, $K_n(x; p)$, is defined as [13], [15]:

$$K_n(x; p) = \sqrt{\frac{\omega_K(x)}{\rho_K(n)}}\;{}_2F_1\!\left(-n, -x;\ -(N-1);\ \frac{1}{p}\right), \quad n, x = 0, 1, \ldots, N-1;\ p \in (0, 1)$$

where $\omega_K(x)$ and $\rho_K(n)$ are the weight function and the squared norm of the KP, respectively, defined as:

$$\omega_K(x) = \binom{N-1}{x} p^x (1-p)^{N-1-x}, \qquad \rho_K(n) = (-1)^n \left(\frac{1-p}{p}\right)^n \frac{n!}{(-(N-1))_n}$$

By varying the control parameter $p$ of the KP, the Krawtchouk moments can extract features from any region of interest (ROI) in an image [11].
A TTR relation is used to compute the Krawtchouk polynomial coefficients (KPCs), thereby reducing the computational complexity and maintaining numerical stability. Different TTR algorithms have been proposed [11], [16]. In this study, the TTR algorithm proposed in [15] is used. In this algorithm, the KP plane is divided into four triangular parts bounded by the primary and secondary diagonals, as shown in FIGURE 1. The KPCs are computed for one triangle only, and the remaining KPCs are obtained using symmetry relations.
The steps to compute the KPCs are as follows [15]:
1) The initial values K_n(0) and K_n(1) are computed.
2) The KPCs in T1 (shown in FIGURE 1) are computed using the n-direction recurrence relation given in [15].
3) Using the symmetry property about the primary diagonal, the KPCs in T2 are computed.
4) Using the symmetry property about the secondary diagonal, the KPCs in T3 and T4 are computed.
5) When p > 0.5, the KPCs are computed from the symmetry relation in the parameter p given in [15] to avoid a zero-valued initial condition.
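For small N, the weighted KP can likewise be evaluated directly from its terminating ₂F₁ series and checked for orthonormality, and for the symmetry at p = 0.5 that is consistent with the identity Q_K ≡ Q_K^T used later in the paper. A sketch, not the TTR scheme of [15]:

```python
import math

def poch(a, k):
    """Rising factorial (Pochhammer symbol) (a)_k."""
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def krawtchouk_matrix(N, p):
    """N x N weighted (normalized) Krawtchouk matrix K[n][x].

    K_n(x; p) = sqrt(w(x)/rho(n)) * 2F1(-n, -x; -(N-1); 1/p) with
    w(x)   = C(N-1, x) p^x (1-p)^(N-1-x)
    rho(n) = (-1)^n ((1-p)/p)^n n! / (-(N-1))_n
    """
    M = N - 1
    K = [[0.0] * N for _ in range(N)]
    for n in range(N):
        rho = ((1 - p) / p) ** n * math.factorial(n) * (-1) ** n / poch(-M, n)
        for x in range(N):
            w = math.comb(M, x) * p ** x * (1 - p) ** (M - x)
            s = 0.0
            for k in range(n + 1):  # series terminates at k = n
                s += (poch(-n, k) * poch(-x, k)
                      / (poch(-M, k) * math.factorial(k))) * (1.0 / p) ** k
            K[n][x] = math.sqrt(w / rho) * s
    return K
```

At p = 0.5 the resulting matrix is symmetric in n and x, which is what makes the transpose identity hold for the KP.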

C. MOMENTS COMPUTATION
The moments are considered an efficient data descriptor because they prevent data redundancy; consequently, 1D and 2D signals can be characterized by their moments [8]. For a 1D signal $I(x)$ of length $N$ samples, the moment $\eta_n$ is defined as:

$$\eta_n = \sum_{x=0}^{N-1} R_n(x)\, I(x), \quad n = 0, 1, \ldots, N-1$$

where $R_n(x)$ is the OP. The reconstruction of the 1D signal is computed as:

$$\hat{I}(x) = \sum_{n=0}^{N-1} \eta_n\, R_n(x)$$

For a 2D signal $I(x, y)$ of size $N \times N$, the moment $\eta_{nm}$ is defined as:

$$\eta_{nm} = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} R_n(x)\, R_m(y)\, I(x, y)$$

The reconstruction of the 2D signal is defined as:

$$\hat{I}(x, y) = \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} \eta_{nm}\, R_n(x)\, R_m(y), \quad x, y = 0, 1, \ldots, N-1 \tag{28}$$

The transform-domain coefficients (moments) can be used as a shape descriptor for different types of signals [17]. In addition, the basis functions of OPs can be used as approximate solutions of differential equations [18].
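A minimal sketch of the forward moment computation and the inverse transform of Eq. (28), using a generic orthonormal basis R_n(x) in place of any specific polynomial (built here by Gram-Schmidt on monomials, which reproduces the normalized TP up to sign):

```python
import math

def orthonormal_basis(N):
    """Modified Gram-Schmidt on the monomials 1, x, x^2, ... over x = 0..N-1.

    Stands in for any discrete orthogonal polynomial R_n(x)."""
    basis = []
    for n in range(N):
        v = [float(x) ** n for x in range(N)]
        for b in basis:
            c = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - c * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        basis.append([vi / norm for vi in v])
    return basis

def moments_2d(I, R):
    """eta_nm = sum_x sum_y R_n(x) R_m(y) I(x, y)."""
    N = len(I)
    return [[sum(R[n][x] * R[m][y] * I[x][y] for x in range(N) for y in range(N))
             for m in range(N)] for n in range(N)]

def reconstruct_2d(eta, R):
    """I_hat(x, y) = sum_n sum_m eta_nm R_n(x) R_m(y)  -- Eq. (28)."""
    N = len(eta)
    return [[sum(eta[n][m] * R[n][x] * R[m][y] for n in range(N) for m in range(N))
             for y in range(N)] for x in range(N)]
```

With all N × N moments retained, the orthonormality of R makes the reconstruction exact up to floating-point error.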

III. THE PROPOSED SEPARABLE POLYNOMIAL
Mathematically, the combination of two orthogonal polynomials is orthogonal [8]. Different forms are generated from this observation. The KTP and TKP are forms of the first level of combination, whereas the SKTP belongs to the second level. However, not all combination forms satisfy the localization and EC criteria. For the first level of combination, TABLE 1 shows the forms that can be generated from the combination of KP and TP. In TABLE 1, $Q_K$ and $Q_T$ are the matrix representations of KP and TP, and $(\cdot)^T$ denotes the matrix transpose. The KP matrix ($Q_K$) and its transpose ($Q_K^T$) are equivalent [13]; thus, $Q_K \equiv Q_K^T$. The localization property is applied as described in [13]. The second level of combination is considered to achieve improved results in terms of localization and EC. The combination of the forms shown in TABLE 2, namely KTP and TKP, produces different forms in the second level. However, not all of these forms can be used because some of the combinations produce the identity matrix. For the orthogonality condition, the polynomial $R_n(x)$ should satisfy the following [8]:

$$\sum_{x=0}^{N-1} R_n(x)\, R_m(x) = \delta_{nm} \tag{29}$$

where $\delta_{nm}$ is the Kronecker delta. Different relations for $R_n(x)$ that satisfy (29) exist, defined in the matrix representation in (30), where $X$ and $Y$ are the matrix forms of the OPs that result from combining the two orthogonal polynomials TP and KP. With (30) applied, TABLE 3 summarizes the second level of combination and its different forms. The first four entries of TABLE 3 show the generation formula of the SKTP, the next eight entries result in an identity matrix, and the remaining entries show a new formula that is investigated in this study.
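The closure property underlying TABLE 1 through TABLE 3 — that a product of orthonormal matrices is again orthonormal, at any level of combination — can be demonstrated numerically. The Givens-rotation generator below is purely illustrative and stands in for Q_T and Q_K; it is not the actual polynomial matrices:

```python
import math
import random

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def is_orthonormal(A, tol=1e-9):
    """Check A A^T = I, i.e., Eq. (29) in matrix form."""
    n = len(A)
    P = matmul(A, transpose(A))
    return all(abs(P[i][j] - (i == j)) < tol for i in range(n) for j in range(n))

def rand_orthogonal(n, seed):
    """Random orthogonal matrix from composed Givens rotations (stand-in basis)."""
    rng = random.Random(seed)
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(3 * n):
        i, j = rng.sample(range(n), 2)
        t = rng.uniform(0.0, 2.0 * math.pi)
        c, s = math.cos(t), math.sin(t)
        for row in A:  # right-multiply by a plane rotation
            row[i], row[j] = c * row[i] - s * row[j], s * row[i] + c * row[j]
    return A
```

A first-level combination is one matrix product; a second-level combination multiplies two first-level products, and orthonormality survives both steps.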
On the basis of the mathematical analysis, only one form verifies both the localization and EC properties. The proposed polynomial is orthogonal because it is generated by multiplying two orthogonal polynomials (KTP and TKP). The new hybrid polynomial of the nth order, $R_n(x)$, is defined in (31), where $X_n(x; p, N)$ and $Y_n(x; p, N)$ are the OPs that result from combining TP and KP as given in (32) and (33). From (31), (32), and (33), the proposed polynomial is expressed as in (34).

A. THE MATRIX REPRESENTATION OF THE PROPOSED POLYNOMIAL
The polynomial can also be expressed in matrix form, which makes the implementation faster [19]. Thus, (32), (33), and (34) are represented in matrix form, with $R$, $X$, $Y$, $Q_K$, and $Q_T$ denoting the matrices of $R_n(x)$, $X_n(x; p, N)$, $Y_n(x; p, N)$, $K_n(x)$, and $T_n(x)$, respectively. In this representation, $R_n(x)$ is the square of the product of TP and KP; therefore, the proposed polynomial is termed the STKP. This second level of combination leads to an EC higher than those of the existing hybrid forms, as discussed in Section IV-B. The generation process of the STKP is illustrated in FIGURE 2. Moreover, FIGURE 3 shows the 2D and 3D plots of the proposed polynomial for N = 64 and different values of p. FIGUREs 3a, 3b, and 3c are the 2D plots for p = 0.5, 0.25, and 0.75, respectively, with signal orders n = 0, 30, 31, and 63. These plots show that the moment and signal indices are associated: the low-order coefficients (n = 0, 1, ..., N/2 - 1) are associated with the left half of the signal, and the high-order coefficients (n = N/2, N/2 + 1, ..., N - 1) with the right half. Moreover, the polynomial coefficients shift as the value of p varies. In the 3D plots (FIGURE 3d, e, and f) with p = 0.5, the values of the polynomial are low when n and x vary in opposite halves (n = N/2, ..., N - 1 with x = 0, ..., N/2, and vice versa).

IV. EXPERIMENTAL ANALYSIS
This section discusses the performance of the STKP in terms of two criteria, namely, localization (where the frequencies exist) and EC. Then, the performance is compared with that of the existing polynomials.

A. THE LOCALIZATION PROPERTY
A relationship between the spatial function and the transform coefficients should be considered to improve the overall quality of the polynomial [8]. Different methods are implemented to test the localization-in-space property of the proposed polynomial.
The moments of the STKP can be computed from the separable basis functions; because of separability, two 1D steps, over the rows and then the columns, are used in the computation [20]. The behavior for different values of p is shown in FIGURE 4 (see, e.g., FIGURE 4c): the STKP coefficients shift as the value of p deviates from 0.5.
Another test of localization examines the ability of the STKP to represent 2D images. Four 256 × 256 test images are concatenated to form a single 512 × 512 test image, as shown in FIGURE 5. In addition, for p = 0.5, the moment matrix of the test image is divided into four parts, namely, P1, P2, P3, and P4, as shown in FIGURE 6. A specific part of the moment matrix is retained to find the ROI in the moment domain; the other parts are set to zero by masking them with a binary mask that contains 1's for the ROI and 0's elsewhere. To reconstruct the image, Eq. (28) is used, as shown in FIGURE 7a-d. The first part of the image, P1 = I(x, y) for x, y = 0, 1, ..., N/2 - 1, can be reconstructed by keeping P1 in the moment matrix and masking the other parts to zero; the mask values are thus 1 in the region n, m = 0, 1, ..., N/2 - 1 and 0 elsewhere. Therefore, no extra computation is needed to find the ROI because of the symmetric relation between the spatial and moment domains. By contrast, the KTP and TKP demand further computation to find the desired part because the reconstructed image part is diagonally opposite to the moment part [8], [9]. Another benefit arises in block processing because no additional index computation is needed. The four image quarters in FIGURE 6 are represented by the moment domain of the complete image, and the moments can represent more image blocks when the block size is reduced. For example, when the number of blocks is 4 × 4, the number of image blocks that can be represented by the moments is 64. This property is beneficial for the local feature extraction of each image part without requiring block processing of the entire image.
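The masking mechanics described above (retain one block of the moment matrix, zero the rest with a binary mask, then invert with Eq. (28)) can be sketched as follows. A generic Gram-Schmidt basis is used here, so the quadrant alignment specific to the STKP is not reproduced; only the mask-and-reconstruct procedure is shown:

```python
import math

def gs_basis(N):
    """Generic orthonormal basis (Gram-Schmidt on monomials), stand-in for STKP."""
    B = []
    for n in range(N):
        v = [float(x) ** n for x in range(N)]
        for b in B:
            c = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - c * bi for vi, bi in zip(v, b)]
        nrm = math.sqrt(sum(vi * vi for vi in v))
        B.append([vi / nrm for vi in v])
    return B

def roi_reconstruct(I, R, mask):
    """Forward transform, elementwise binary mask on the moments, then Eq. (28)."""
    N = len(I)
    eta = [[sum(R[n][x] * R[m][y] * I[x][y] for x in range(N) for y in range(N))
            for m in range(N)] for n in range(N)]
    etam = [[eta[n][m] * mask[n][m] for m in range(N)] for n in range(N)]
    return [[sum(etam[n][m] * R[n][x] * R[m][y] for n in range(N) for m in range(N))
             for y in range(N)] for x in range(N)]
```

With an all-ones mask the reconstruction is exact; a partial mask discards the masked-out moment energy, and by Parseval's relation the reconstructed energy cannot exceed that of the original.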

B. THE ENERGY COMPACTION
The EC property is the ability of the transform to redistribute the signal energy into a few polynomial coefficients; the fewer the coefficients, the better the EC [21]. The standard method to compute the EC of a polynomial uses the first-order Markov model [22]: a stationary, zero-mean, first-order Markov sequence of length $N$ is used to find the moment energy distribution [13], [19]. The covariance function (CF) of such a sequence with correlation coefficient $\rho$ is a Toeplitz matrix with constant elements along each diagonal, defined as [22]:

$$[C]_{x,y} = \rho^{|x-y|}, \quad x, y = 0, 1, \ldots, N-1$$

The covariance matrix $C$ is transformed into the moment domain using:

$$\Lambda = R\, C\, R^T$$

where $R$ is the matrix form of the OP. The variances of the transform coefficients ($\sigma_d^2$) are the diagonal entries of $\Lambda$. In this study, two values of the covariance coefficient are checked, $\rho = 0.8$ and $\rho = 0.9$, with $N = 8$. TABLE 4 illustrates the transform coefficient variances of DKTT, DTKT, SKTT, and STKT as a function of the diagonal index ($d$).
For DKTT and SKTT, the maximum variance values are located in the middle of the coefficient vector and decrease gradually toward the edges, whereas the maximum values of DTKT and the proposed STKT are at the edges and decrease toward the center. From the reported results, the priority of the moment selection order ($n$) for the STKT, which represents the significant signal positions, therefore starts from the edges of the coefficient vector. For EC evaluation, the normalized restriction error ($J_M$) is defined as follows [22]:

$$J_M = \frac{\sum_{q=M}^{N-1} \sigma_q^2}{\sum_{q=0}^{N-1} \sigma_q^2}$$

where $\sigma_q^2$ denotes the variances $\sigma_d^2$ arranged in descending order. FIGURE 8 illustrates the restriction error for DTKT, DKTT, SKTT, and STKT as a function of the moment order ($q$), computed for the two values of $\rho$.
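The EC evaluation pipeline — build the Markov covariance, transform it with Λ = R C R^T, read the variances off the diagonal, and compute J_M — can be sketched as below. The orthonormal DCT-II is used here as an assumed stand-in basis with well-known high EC; it is not one of the compared polynomial transforms (DKTT, DTKT, SKTT, STKT):

```python
import math

def markov_cov(N, rho):
    """Toeplitz covariance of a first-order Markov source: C[x][y] = rho^|x - y|."""
    return [[rho ** abs(x - y) for y in range(N)] for x in range(N)]

def transform_variances(R, C):
    """Diagonal of Lambda = R C R^T: variances sigma_d^2 of the transform coefficients."""
    N = len(R)
    RC = [[sum(R[i][k] * C[k][j] for k in range(N)) for j in range(N)] for i in range(N)]
    return [sum(RC[i][k] * R[i][k] for k in range(N)) for i in range(N)]

def restriction_error(variances, M):
    """J_M: fraction of total variance lost when only the M largest coefficients are kept."""
    s = sorted(variances, reverse=True)
    return sum(s[M:]) / sum(s)

def dct_matrix(N):
    """Orthonormal DCT-II, a classic high-EC transform for Markov sources."""
    return [[(math.sqrt(1.0 / N) if n == 0 else math.sqrt(2.0 / N))
             * math.cos(math.pi * (2 * x + 1) * n / (2 * N))
             for x in range(N)] for n in range(N)]
```

Because R is orthonormal, the total variance (the trace of C) is preserved; a transform with good EC simply concentrates that total into the first few ranked coefficients, driving J_M down quickly.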

C. IMAGE RECONSTRUCTION ANALYSIS
Image reconstruction analysis is performed to demonstrate the ability of a polynomial for feature representation by utilizing a minimal number of polynomial transform coefficients (moments). This section illustrates the image reconstruction analysis of the STKP. A comparison is also performed with the Charlier polynomial (CHP) and the Meixner polynomial (MXP). The nth order of the CHP, $C_n(x; a_1)$, is given by [20]:

$$C_n(x; a_1) = \sqrt{\frac{\omega_C(x)}{\rho_C(n)}}\;{}_2F_0\!\left(-n, -x;\ -;\ -\frac{1}{a_1}\right), \quad n, x = 0, 1, \ldots;\ a_1 > 0$$

where $\omega_C$ and $\rho_C$ are the weight and squared norm of the CHP, respectively, given by:

$$\omega_C(x) = \frac{e^{-a_1} a_1^x}{x!}, \qquad \rho_C(n) = \frac{n!}{a_1^n}$$

The nth order of the MXP, $M_n(x; b_1, c_1)$, is given by [20]:

$$M_n(x; b_1, c_1) = \sqrt{\frac{\omega_M(x)}{\rho_M(n)}}\;{}_2F_1\!\left(-n, -x;\ b_1;\ 1 - \frac{1}{c_1}\right), \quad n, x = 0, 1, \ldots;\ b_1 > 0,\ c_1 \in (0, 1) \tag{43}$$

where $\omega_M$ and $\rho_M$ are the weight and squared norm of the MXP, respectively, given by [20]:

$$\omega_M(x) = \frac{(b_1)_x\, c_1^x\, (1-c_1)^{b_1}}{x!}, \qquad \rho_M(n) = \frac{n!}{(b_1)_n\, c_1^n}$$

The functions of the CHP and MXP are orthogonal on the unbounded interval $[0, \infty)$.
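As an illustration of the CHP definition and its orthogonality on unbounded support, the normalized Charlier functions can be evaluated directly and checked on a truncated range; the Poisson weight decays fast enough that truncation error is negligible for modest a₁. A sketch under those stated assumptions:

```python
import math

def poch(a, k):
    """Rising factorial (Pochhammer symbol) (a)_k."""
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def charlier_norm(n, x, a):
    """Normalized Charlier function sqrt(w(x)/rho(n)) * 2F0(-n, -x; -; -1/a),
    with w(x) = exp(-a) a^x / x! and rho(n) = n! / a^n."""
    s = sum(poch(-n, k) * poch(-x, k) / math.factorial(k) * (-1.0 / a) ** k
            for k in range(n + 1))  # series terminates at k = n
    w = math.exp(-a) * a ** x / math.factorial(x)
    rho = math.factorial(n) / a ** n
    return math.sqrt(w / rho) * s
```

For example, with a₁ = 5 the sums over x = 0..59 already reproduce the orthonormality relations to machine-level accuracy, because the Poisson(5) weight beyond x = 60 is vanishingly small.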
In the experiment, an image of a cameraman is utilized for testing. First, the image is transformed into a moment domain using STKP, CHP, and MXP. Thereafter, the image is reconstructed using a limited number of moments.
In FIGURE 9, the CHP and MXP show a rippling artifact near the sharp edges (ringing effect). This rippling artifact is reduced for CHP and MXP as the moment order increases. However, the rippling artifact is still observed even when the moment order value is 256. The image reconstruction process using STKP shows improved results compared with CHP and MXP at the same moment order. Specifically, the artifacts are observed at the image boundary, whereas the visual content is clear at the image center. After the moment value of 128, the visual content of the image becomes clear, and a small amount of ringing effect can be observed. Therefore, the image reconstruction ability of the STKP is better than that of CHP and MXP.
For further elucidation, the normalized mean square error (NMSE) is utilized to measure the difference between the reconstructed image ($\hat{I}$) and the original image ($I$). The NMSE is defined as [23]:

$$\mathrm{NMSE} = \frac{\sum_{x}\sum_{y} \left[I(x, y) - \hat{I}(x, y)\right]^2}{\sum_{x}\sum_{y} I(x, y)^2}$$

The NMSE is computed while gradually increasing the number of moments used to reconstruct the image. The result is shown in FIGURE 10, where the NMSE is plotted for the reconstructed image of size 256 × 256 with the following parameter values: 1) for CHP, $a_1 = 128$; 2) for MXP, $b_1 = 60$ and $c_1 = 0.5$; and 3) for STKP, $p = 0.5$. The figure reveals that the CHP has the highest error in comparison with MXP and STKP. The NMSE for MXP starts at ≈ 2.5 and reduces to ≈ 7.3 × 10⁻³. The STKP has the lowest reconstruction error, which starts from ≈ 0.1 and decreases to ≈ 0.02 at a moment order of 48; for moment orders greater than 48, the NMSE continues to decrease, reaching zero at a moment order of 256.
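The NMSE of a reconstruction is straightforward to compute; a minimal sketch:

```python
def nmse(original, reconstructed):
    """NMSE = sum (I - I_hat)^2 / sum I^2 over all pixels of two 2D arrays."""
    num = den = 0.0
    for row_o, row_r in zip(original, reconstructed):
        for a, b in zip(row_o, row_r):
            num += (a - b) ** 2
            den += a ** 2
    return num / den
```

An identical reconstruction yields an NMSE of 0, and the normalization by the original image energy makes the measure independent of the image's overall intensity scale.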
To conclude, the STKP performance in terms of EC and reconstruction error is more convenient for image reconstruction than other orthogonal polynomials.

V. THE APPLICATION FRAMEWORK USING THE PROPOSED POLYNOMIAL
In this section, an object recognition system (ORS) is chosen to test the performance of the STKP. The recognition process uses features extracted via the STKP, with an SVM classifier for the classification process; a K-nearest neighbor (KNN) classifier is also used to verify the results. Then, a comparison with different state-of-the-art polynomials, namely KTP, TKP, and SKTP, is performed to evaluate the performance of the STKP.

A. ORS IMPLEMENTATION
The fundamental objectives of an ORS are to identify the objects in a given image and to distinguish different kinds of objects [24]. FIGURE 11 illustrates the basic steps of the ORS model utilized in this study. Two phases are implemented in the recognition process: training and testing. In the training phase, the ORS is trained with several images, each represented by related features: the image is transformed into the moment domain to extract distinctive features using the STKP, and these features are used by the classifier to recognize objects. Therefore, the OPs (STKP) are first generated to extract the features. Thereafter, the object images $I_o$ of $N \times M$ dimensions are transformed into the moment domain as follows:

$$\eta = R_1\, I_o\, R_2^T$$

where $R_1(N, N)$ and $R_2(M, M)$ are the generated polynomial matrices for the two image dimensions. The moment matrix ($\eta$) is considered a feature vector, and each feature vector is paired with the label of the object image ID. The feature vector is given as input to the classifier, which recognizes the object [25]. During the test phase, the test images are categorized by the classifier. The SVM, a machine learning technique, is used for classification for several reasons. First, the SVM maximizes the margin between the separating hyperplane and the data; it therefore minimizes the structural risk by controlling the out-of-sample error [26], [27]. By generating a hyperplane, the SVM separates the positive and negative images [28]. Second, the SVM is well suited to the recognition task and highly resistant to noisy data [27]. The LIBSVM package with a kernel function is utilized for the SVM implementation [29], [30]. Using a suitable kernel function provides an improved description of the frame information.
The radial basis function (RBF) kernel is considered a reasonable choice because it handles the nonlinear relation between class labels and attributes [30]. For the correct prediction of the testing data, cross-validation (CV) is applied to identify good parameters of the RBF kernel. The parameters to be tuned are the SVM penalty (cost) parameter $C$ and the RBF parameter gamma ($g$). Five-fold CV with exponentially growing sequences of $C$ and $g$ is applied; this is a practical method for identifying good parameters. The range of $C$ is $(2^0, 2^1, \ldots, 2^5)$, and the range of $g$ is $(2^{-15}, 2^{-9}, \ldots, 2^0)$. The obtained CV accuracy is the percentage of data correctly classified. The $(C, g)$ pair with the minimum training error and the best accuracy on the dataset is selected by iterating over the pairs.

B. PERFORMANCE EVALUATION OF ORS BASED STKP
In this section, the performance of the STKP is evaluated. The Columbia Object Image Library (COIL-20) database [31] is used to validate the recognition process based on the STKP. This database, which contains 20 different objects with 72 grayscale processed images per object, is widely used for classification; the total number of images is therefore 1,440, each of size 128 × 128. The images are taken in different poses, with a five-degree angle between poses. FIGURE 12 shows sample images from the COIL-20 database. The correct recognition percentage (CRP) is used to evaluate the recognition accuracy and is defined as:

$$\mathrm{CRP} = \frac{\text{number of correctly predicted subjects}}{\text{total number of subjects in the testing set}} \times 100\%$$

An experiment is performed for different training-set sizes ($N_{Tr}$) and a fixed number of features ($N_f$) to find the optimal number of samples in $N_{Tr}$. TABLE 5 shows the results for $N_f = 924$, which represents 5.5% of the total features, and $N_{Tr} = 18$, 22, and 25. The results are obtained over five runs; each run comprises randomly selected samples for the training and testing sets, with the parameter $C$ equal to 32. TABLE 5 shows that as the size of the training set increases, the time required for the classification process increases while the classification accuracy remains comparable. Therefore, in the next experiment, the number of training samples per object is set to 18 (25%), and the number of testing samples per object is thus 54 (75%).
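The CRP metric itself is a simple ratio over the testing set; a minimal sketch:

```python
def crp(predicted, actual):
    """Correct recognition percentage: share of testing subjects predicted correctly."""
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return 100.0 * correct / len(actual)
```

For example, three correct predictions out of four testing subjects give a CRP of 75%.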
The number of moments is varied to find the number that balances the accuracy and computation speed of the classification process. The following factors are reported: 1) the best SVM parameter values ($C$ and $g$); 2) the feature extraction (moment computation) time and the classification time; and 3) the average CRP. The results shown in TABLE 6 are obtained over five runs; in each run, the samples for the training and testing sets are selected randomly. In addition, the moments are selected from the region of maximum moment energy for each polynomial so that most of the moment energy is retained, as shown in FIGURE 13a. TABLE 6 shows that when the number of moments used as a descriptor decreases from 6612 to 924, the average CRP increases, and the feature extraction and classification times decrease. However, when the number of features equals 684, the average CRP decreases. Thus, the optimal number of moments is 924 (5.64%), which can be used for an improved CRP.
The robustness of the STKP is verified by comparing its mean CRP with those of the KTP, TKP, and SKTP. The more energetic parts of the moment matrix are chosen to form the feature vector and represent the images efficiently, as shown in FIGURE 13. For the STKP and TKP, FIGURE 13a illustrates that the highest energy exists at the corners; for the SKTP and KTP, the highest energy exists in the middle, as shown in FIGURE 13b. Experimentally, only 5.64% of the moments are extracted, i.e., 924 moments; thus, the feature vector length is 1 × 924. The training and testing phases are executed 100 times to test the stability of the ORS: 18 images per object are used for training, and 54 images per object are used for testing. In addition, Gaussian noise (σ² = 1% and 5%) and salt-and-pepper noise (density = 10%) are added to the dataset images. The noisy images are utilized to show the capability of the STKP in extracting features. FIGURE 14 illustrates the mean CRP for STKP, SKTP, KTP, and TKP for noise-free and noisy images.
The results show that the STKP is a good feature extraction tool and performs consistently irrespective of the noise type. Moreover, the STKP achieves the highest CRP for noise-free and noisy images in the classification process, as depicted in FIGURE 14. For instance, the average CRP is 98.36% when the STKP is utilized as a feature extraction tool, whereas the best CRP among the other polynomials is 98.00%, obtained with the KTP. In noisy environments, the CRP is 98.75% when the STKP is used under Gaussian noise with σ² = 0.01, whereas the best CRP among the other polynomials is 98.27%, obtained with the KTP. The results also show that the STKP outperforms the state-of-the-art polynomials for images degraded by Gaussian noise with σ² = 0.05 and by salt-and-pepper noise. Therefore, the STKP can be used for feature extraction because it shows more stable and remarkable results than the state-of-the-art polynomials.
The KNN classifier is utilized to confirm the results because of its simplicity and capability [32]. The value of k stands for the number of nearest neighbors. For a classification task, the output of the KNN classifier is a class membership: the object is assigned to the most common class among its k neighbors, i.e., by majority voting between the neighbor classes [33]. The value of k is selected empirically by running the ORS several times; k = 1 is found to be the suitable choice, and the Euclidean distance is utilized. FIGURE 15 illustrates the results obtained using the KNN classifier, measured with the same procedure as for the SVM, i.e., 5.64% of the moments. The training set includes 18 samples per object, and the testing set incorporates 54 samples per object. The CRP is computed 100 times through random object selection for the training and testing sets, for noisy and noise-free images. The CRP in a noise-free environment is 94.46% when the STKP is utilized, close to the best result among the other polynomials (94.62%, obtained with the KTP). However, the average CRP over all noisy environments is 94.25% with the STKP versus 93.09% with the KTP as the feature extraction tool. The attained results reveal that the STKP outperforms the state-of-the-art polynomials in noisy conditions.
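The KNN decision rule described above (Euclidean distance, majority vote over the k nearest training samples) can be sketched as follows; the toy data are illustrative, not COIL-20 features:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=1):
    """Assign the query to the most common class among its k nearest
    (Euclidean-distance) training neighbors, via majority vote."""
    neighbors = sorted(zip(train_X, train_y),
                       key=lambda pair: math.dist(pair[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

With k = 1, as selected empirically for the ORS, the rule degenerates to assigning the label of the single closest training sample.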
The obtained results confirm the suitability of the STKP for the ORS application. Moreover, FIGURE 14 and FIGURE 15 show that the results of the ORS based on SVM (ORS-SVM) for noise-free and noisy environments are better than those of the ORS based on KNN (ORS-KNN). For instance, for noise-free images, the CRP for ORS-SVM is 98.36%, whereas the CRP for ORS-KNN is 94.46%; the average result for noisy images is 98.17% for ORS-SVM and 94.25% for ORS-KNN. Thus, the results show that (1) the STKP outperforms the state-of-the-art polynomials composed of TP and KP, and (2) ORS-SVM is more effective than ORS-KNN.

VI. CONCLUSION
In this study, a new separable polynomial and its moments, based on the second level of combination of TP and KP, are proposed. Compared with the existing hybrid forms (KTP, TKP, and SKTP), the proposed polynomial, STKP, achieves remarkable results in terms of the localization and EC properties.
Compared with CHP and MXP, STKP achieves optimal image representation and reconstruction features. A comparative study is performed via an object recognition application using STKP and state-of-the-art polynomials. The experimental results of the ORS show that STKP has the best CRP values and an interesting feature extraction ability. STKP and its moments demonstrate superior performance and potential in signal representation and feature extraction, thus outperforming other hybrid forms. However, invariant moments can be used to improve the CRP. Thus, the invariant moment of STKP will be investigated in the future. In addition, different applications, especially for video content analysis, can be implemented to examine features by using STKP.