Smoothing the Secant Line Slope Using an Aggregation Fischer-Burmeister Function

Some objective functions are piecewise and therefore non-differentiable at specific points, which significantly affects the convergence rate and computational time of deep networks. The non-differentiability issue increases computational time dramatically. It can be addressed by reformulating the absolute value equation (AVE) as a parametrized single smooth equation. However, a single smoothing function is less effective at producing a good curve at the breaking points. Therefore, this work formulates a new smoothing function, Aggregation Fischer-Burmeister (AFB), by amalgamating two popular smoothing functions: Aggregation (AGG) and Fischer-Burmeister (FB). These functions can estimate the minimum from both sides of the canonical piecewise function (CPF). If an amalgamation of smoothing functions can affect the differentiability of the piecewise objective function, then amalgamating the AGG and FB smoothing functions will produce a smooth secant line slope on both sides with less computational time. To evaluate the proposed technique, we implement a Newton algorithm in MATLAB with random initial values. The new smoothing function is formulated by first converting the piecewise objective function to the CPF; we then apply it in the Newton algorithm. Finally, to validate the AVE difficulty of the new piecewise function, we perform one run for each initial value and 30 runs for time evaluation. The experimental analysis verifies that the proposed technique outperforms the individual AGG and FB techniques in terms of the natural logarithm, exponential, and square root. Hence, this novel technique yields a promising smooth approximation for the AVE with less computational time.


I. INTRODUCTION
Piecewise functions are widely applied in various areas, such as image processing [1]. These functions are pieced together from a set of connected linear segments, which makes them neither continuous nor differentiable everywhere [2]. One of the most well-known piecewise techniques is the canonical piecewise-linear model [2], [3], which is appropriate for nonlinear modelling. This technique has fewer parameters and a lower computational burden. However, because it is piecewise in nature, its application becomes restricted when the derivative is required [2]-[4].
Conventional neural networks have shown significant performance on various problems over the years [5], [6]. Even though their use in recognition schemes has proven applicable, some of these networks, such as principal component analysis (PCA) and auto-encoders with multiple hidden layers, still struggle with the selectivity-invariance dilemma and the vanishing-gradient issue, respectively [6]. Deep learning, the most prevalent supervised learning technique, disentangles complex aspects of the input by generating multiple levels of representation [7]. Convolutional neural networks (CNNs) employ gradient-based algorithms, so network learning can become trapped in saddle points or local minima. This affects CNN performance significantly and has prompted deep learning researchers to tackle the vanishing-gradient and slow-convergence issues [8]. (The associate editor coordinating the review of this manuscript and approving it for publication was Vivek Kumar Sehgal.)
As the objective function is an essential component of CNNs, an appropriate objective function should be used to prevent the network from becoming trapped in local minima [8], [9]. Some objective functions in deep CNNs are piecewise in nature and therefore not continuously differentiable, which degrades the performance of the deep network. Several signs, such as corners, cusps, vertical tangent lines, and discontinuities, indicate the non-differentiability of a function and can be recognized from its graph. Many works address the non-differentiability issue by reformulating the absolute value equation (AVE) system through a parametrized smooth equation [10]. Among existing smoothing functions, the Aggregation (AGG) and Fischer-Burmeister (FB) functions [10], [11] are considered the most popular. These functions can estimate the minimum function from both sides while retaining the original form of the piecewise objective function [12]. Since the AGG and FB functions utilize logarithm and square-root functions, respectively, they can smooth any corner or breaking point by producing a curve [13]. However, smoothing with a single square-root or a single logarithm function is less effective because it produces only a slight curve [11], [12].
If an amalgamation of smoothing functions can affect the differentiability of the piecewise objective function, then amalgamating the AGG and FB smoothing functions will produce a smooth secant line slope on both sides with less computational time. Therefore, in this study, we propose a new smoothing technique, Aggregation Fischer-Burmeister (AFB), to alleviate the non-differentiability of the canonical piecewise objective function by amalgamating AGG and FB. The amalgamation of these functions can express the smoothing in terms of the natural logarithm [12], the exponential [11], [12], and the square root [11].
The remainder of this paper is organized as follows. Part II covers the preliminaries of the essential concepts of the AVE and the impact of smoothing functions on non-differentiable functions. Part III discusses the methodology. Part IV presents the attained results and the discussion. Finally, Part V concludes the paper.

II. RELATED WORKS
This section covers the essential concepts of the AVE and the impact of a single smoothing technique on the non-differentiability issue. Following the pilot study of [14], many researchers have studied the absolute value equation comprehensively [15], [16]. The standard form of the AVE is

Mx − |x| = n, (1)

where M ∈ R^{p×p} and n ∈ R^p.
Here, |x| denotes the vector whose components are the absolute values of the components of x ∈ R^p. Equation (1) is not completely smooth due to the presence of |x|. The AVE (1) commonly arises from optimization problems such as linear complementarity problems, mixed-integer programs, quadratic programs, and bimatrix games [16]. Solving the AVE is a nondeterministic polynomial-time (NP-hard) problem [17].
To overcome the issue of the standard AVE (1), several approaches have been presented in previous studies [15]-[18].
The Newton-Raphson technique is one of the most prominent ways to approximate the root of a function accurately and quickly, as well as to solve equations numerically [18]-[22]. It is based on the concept of linear approximation [18], in which a straight tangent line is used to approximate the function locally, so the function must be continuous and differentiable. The technique estimates the root of the function starting from an initial guess. The work of [23] developed a second-order cone complementarity technique, offering a standard regularized smoothing Newton method to overcome the monotone second-order cone complementarity problem. They also noted that the regularization parameter and the smoothing parameter are similar by definition. Furthermore, the standard regularized smoothing Newton algorithm converges globally and quadratically under mild conditions [23], and their analysis indicates that the technique is efficient. The study of [24] utilized the smoothing Newton technique to tackle the difficulty arising from an AVE associated with the second-order cone (SOCAVE). They reformulated the SOCAVE, an extension of the standard AVE, using a smoothing function and overcame the iterative issue with a smoothing Newton algorithm. The SOCAVE is given by

Mx + N|x| = n, (2)
where M, N ∈ R^{p×p} and n ∈ R^p are as in (1). In (2), |x| represents the absolute value of x, obtained as the square root of the Jordan product "•" of x with itself. The formulas of the standard AVE (1) and the SOCAVE (2) look similar, but the main difference lies in the definition of |x|: in the standard AVE (1), |x| is taken component-wise as |x_i| for every x_i ∈ R, whereas in the SOCAVE (2), |x| denotes the vector satisfying |x| = √(x • x) associated with the second-order cone under the Jordan product. The second-order cone (SOC) admits a spectral decomposition of x for x = (x_1, x_2) ∈ R × R^{n−1}, as in (3), where p_1(x) and p_2(x) are the two spectral values of x. For every x ∈ R^n, the absolute value over the SOC is the sum of the positive and negative parts of x, which equals |x| = √(x • x). Combining the expressions of x_+ and x_− with (4), the absolute value |x| can be recast as in (5), where p_1(x) and p_2(x) denote the spectral values of x and v_x^(1) and v_x^(2) denote the spectral vectors of x. The smoothing Newton technique overcomes the SOCAVE (2) issue [24]. Due to the non-differentiability of |x| for x ∈ R, a smoothing function can be used to replace the nonsmooth function with a continuously differentiable one. Hence, they define the function φ_I(·, ·) : R² → R as in (6). The smoothing function of [24] originates from the FB function [25]. Letting Φ_I : R × R^n → R^n be a vector-valued smoothing function, the spectral decomposition of x from (4) and the function φ_I from (6) are combined and redefined accordingly. After suitable settings, the quadratic convergence obtained in [24] indicates the effectiveness of the smoothing algorithm. Apart from that, the effectiveness of the proposed method of [24] is evaluated through two kinds of preliminary numerical comparisons.
In the first stage, they compared the smoothing Newton technique against the generalized Newton technique; the experimental results showed that both algorithms are suitable for solving the SOCAVE. More broadly, smoothing methods have been applied intensively to mathematical programming problems [26] and are efficient for optimization problems such as nonlinear complementarity problems [27], among others [28]. Furthermore, the study of [12] proposed a novel smoothing method for the nonlinear complementarity problem (NCP), which is to find a vector x ∈ R^n such that

x ≥ 0, F(x) ≥ 0, x^T F(x) = 0. (7)

Appropriate smoothing functions play a crucial role in obtaining an approximate smooth problem for (7). Unlike existing smoothing approaches, [12] proposed a semismooth smoothing function ϕ : R_+ × R² → R defined for all (µ, c, d) ∈ R_+ × R². It estimates the smoothing parameter µ by calculating the minimum of each function on a single side only. The algorithm tackles the nonlinear complementarity problem using the Jacobian smoothing method [29]. The experimental results show that the suggested technique solves the NCP with a minimal number of iterations and higher accuracy than others, and it is robust in finding solutions to large-scale test problems. However, it is affected by the choice of initial value.
Piecewise functions have a simple structure due to their compact formula [2]-[4]. One of the most popular such models is the canonical piecewise function. These functions are only partially differentiable because they consist of more than one equation, so their application is restricted when the derivative of the function is required. Hence, several smoothing functions have been suggested in the literature [30], [31] to overcome the non-differentiability of these models. Having the ability to estimate the minimum function from both sides, AGG and FB are the two most popular smoothing techniques [12]. The study of [31] proposed a smooth piecewise model by substituting polynomial functions for the local linear functions. They first applied a generalization of the canonical piecewise function to their polynomial function, which yields a better smoothing piecewise model. Their smoothing function successfully overcomes the derivative issue by achieving a non-zero state. On the other hand, polynomial functions are less suitable for modelling real-world device complexity: as the input space is divided into larger regions, these functions tend to oscillate.
Another study [32] suggested a novel smooth function by smoothing the symmetric perturbed Fischer-Burmeister function. The proposed smooth function [32] can solve second-order cone optimization (SOCO) problems. The approach solves only one linear system of equations and performs only one line search per iteration. Experimental analysis of the proposed algorithm shows results comparable to interior-point approaches.
In conclusion, several recent works [12], [24] solve the non-differentiability issue of the AVE and its extension, the SOCAVE, by utilizing a single smoothing function. Those studies employ the Newton-Raphson technique to estimate the differentiability and continuity of the function after applying the smoothing function. The Newton-Raphson technique can accurately and quickly approximate the root of a function as well as solve equations numerically.
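As a concrete illustration, the basic Newton-Raphson update x_{k+1} = x_k − f(x_k)/f'(x_k) referred to throughout can be sketched as follows (a minimal Python sketch, not the paper's MATLAB code):

```python
def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson: follow the tangent line at x down to its zero,
    x_{k+1} = x_k - f(x_k) / df(x_k), until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: approximate sqrt(2) as the positive root of f(x) = x**2 - 2.
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Each iteration replaces the function by its tangent line, which is why continuity and differentiability of f are required near the root.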

III. METHODOLOGY
This section is divided into three subsections. First, we describe the principle of the proposed smoothing piecewise objective function. Second, we detail how the smoothing Newton algorithm is applied with the proposed smoothing objective function to solve the AVE issue. Finally, we describe our proposed smoothing objective function based on the amalgamation of two smoothing functions.

A. THE PRINCIPLE AND PROPOSED NEW SMOOTHING PIECEWISE OBJECTIVE FUNCTION
The proposed smoothing objective function involves three steps (as shown in Fig. 1): Step 1: Convert the piecewise objective function into the canonical piecewise function (CPF) to compact the formula with a minimum number of parameters.
Step 2: Amalgamate the AGG and FB smoothing functions into the proposed new smoothing function (AFB) to overcome the non-differentiability issue of the canonical form.
Step 3: Build a precise smoothing Newton algorithm based on the proposed AFB smoothing function to solve the AVE issue in the piecewise objective function, making the secant line slopes on the right and left sides of the function equivalent and reducing the computational time.

B. CONVERSION OF PIECEWISE OBJECTIVE FUNCTION TO CANONICAL FORM
Given any single-valued piecewise-linear function q(x) with at most L corner points E_1 < E_2 < ··· < E_L, the equation is given in (9), where d and f_i are scalars for i = 1, 2, ..., L+1, and e and x are n-dimensional vectors. J^(i) represents the slope of the i-th linear segment of the piecewise function. The values of d, e, and f_i are computed by (10), (11), and (12), respectively.
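The canonical form can be evaluated directly. The sketch below assumes the single-variable Chua-Kang canonical structure q(x) = d + e·x + Σ f_i·|x − E_i| implied by the description above; the paper's exact form in (9) may differ:

```python
def cpf(x, d, e, f, E):
    """Canonical piecewise-linear form: q(x) = d + e*x + sum_i f_i * |x - E_i|.
    d and e are scalars; f and E list the |.|-term coefficients and corners."""
    return d + e * x + sum(fi * abs(x - Ei) for fi, Ei in zip(f, E))

# One-corner example: slope 0 left of x=0, slope 1 right of it (ReLU-like).
# e = (0 + 1)/2 = 0.5 and f_1 = (1 - 0)/2 = 0.5 reproduce both slopes.
y_neg = cpf(-2.0, d=0.0, e=0.5, f=[0.5], E=[0.0])  # left segment: 0
y_pos = cpf(3.0, d=0.0, e=0.5, f=[0.5], E=[0.0])   # right segment: x
```

The compactness comes from encoding every corner with a single |·| term instead of one equation per segment.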
The objective function of this study is stated in (13). This piecewise objective function consists of four learnable parameters, each selected carefully so that the function behaves appropriately over the specified ranges. The two parameter values {t_r, t_l} are {0.4, −0.4}, respectively. Unfortunately, these values create corner shapes that make the function non-differentiable at these points. Additionally, the values of {a_r, a_l} are {0.2, 0.2}. Substituting these values into the function y gives (14), which eventually simplifies to the form stated in (15).
Subsequently, to convert the piecewise objective function y of this study to the canonical form, we first obtain three quantities, namely the slopes, the segments, and the breaking points, from (15). The slope values J^(i), taken from the coefficients of x, are {0.2, 1, 0.2}, and the segment count is three, as this function consists of three pieces. The number of breaking points σ equals the number of segments minus one; the breaking points, denoted by {t_l, t_r}, are {−0.4, 0.4}. The outer slopes of this function are specified by {a_r, a_l} = {0.2, 0.2}. The values of {d, e, f_i} for the objective function of this study are then computed from (10), (11), and (12), respectively.
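Under the standard canonical-form identities e = (J^(1) + J^(L+1))/2 and f_i = (J^(i+1) − J^(i))/2 (assumed here, since (10)-(12) are not reproduced above), the stated slopes {0.2, 1, 0.2} and breaking points {−0.4, 0.4} can be checked numerically. The reconstruction of (15) below, with unit slope between the breaks and zero intercept, is itself an assumption based on the stated parameters {t_r, t_l, a_r, a_l}:

```python
def piecewise(x):
    # Assumed reconstruction of (15): unit slope between the breaking
    # points +-0.4, slope 0.2 outside them.
    if x > 0.4:
        return 0.2 * (x - 0.4) + 0.4
    if x < -0.4:
        return 0.2 * (x + 0.4) - 0.4
    return x

slopes = [0.2, 1.0, 0.2]   # J^(i) for the three segments
breaks = [-0.4, 0.4]       # breaking points t_l, t_r

e = 0.5 * (slopes[0] + slopes[-1])    # linear coefficient
f = [0.5 * (slopes[i + 1] - slopes[i])
     for i in range(len(slopes) - 1)]  # |.| coefficients

def canonical(x, d=0.0):
    return d + e * x + sum(fi * abs(x - bi) for fi, bi in zip(f, breaks))

# The canonical form reproduces the piecewise definition on every segment.
mismatch = max(abs(canonical(x) - piecewise(x))
               for x in (-2.0, -0.4, -0.1, 0.0, 0.3, 0.4, 1.5))
```

With these identities the coefficients come out as e = 0.2 and f = {0.4, −0.4}, matching the breaking-point coefficients used later in the text.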
The canonical form of the piecewise objective function of this study is derived as below:

1) APPLY AGG AND FB TO CANONICAL PIECEWISE OBJECTIVE FUNCTION
In this section, the two popular smoothing methods AGG and FB are applied to make the functions of this study smooth. The function J has two parameters, l and x, where l denotes the smoothing parameter and x represents the input variable. Replacing the absolute-value function of (8) with its equivalent smooth AGG approximation (22), we infer the canonical technique for the objective function as shown in (23) and (24).
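One common aggregation (log-exponential) smoothing of |x|, which may correspond to the J function above (the paper's (22) is not reproduced here, so this exact form is an assumption), is |x| ≈ l·ln(e^{x/l} + e^{−x/l}):

```python
import math

def agg_abs(x, l=0.002):
    """Aggregation (log-exponential) smoothing of |x| (assumed form):
    |x| ~ l * ln(exp(x/l) + exp(-x/l)), smooth for l > 0.
    Factoring out exp(|x|/l) avoids overflow for large |x|/l."""
    a = abs(x) / l
    return l * (a + math.log1p(math.exp(-2.0 * a)))

corner = agg_abs(0.0)   # strictly positive: the corner of |x| is rounded off
far = agg_abs(1.0)      # ~1.0: the smoother tracks |x| away from the corner
```

The natural-logarithm term is what rounds the corner: at x = 0 the value is l·ln 2 rather than 0, and it decays to |x| exponentially fast away from the breaking point.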
Moreover, the function T has three parameters (a, b, l), where a and b denote the absolute values at each breaking point and l denotes the smoothing parameter. Substituting the absolute-value function of (9) with its equivalent smooth FB approximation (25), we derive the canonical model for the objective function as shown in (26) and (27).
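Similarly, a Fischer-Burmeister-type smoothing of |x| used in the AVE literature (assumed here, since the paper's (25) is not reproduced) replaces |x| = √(x²) with the smooth square root √(x² + 4l²):

```python
import math

def fb_abs(x, l=0.002):
    """Fischer-Burmeister-type smoothing of |x| (assumed form):
    |x| ~ sqrt(x**2 + 4*l**2), smooth for l > 0."""
    return math.sqrt(x * x + 4.0 * l * l)

corner = fb_abs(0.0)  # equals 2*l rather than 0: the corner is rounded off
far = fb_abs(3.0)     # ~3.0: the smoother tracks |x| away from the corner
```

Here the square-root term does the smoothing: the perturbation 4l² keeps the radicand strictly positive, so the derivative x/√(x² + 4l²) exists everywhere, including at the breaking point.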

2) THE PROPOSED AFB BASED ON THE AMALGAMATION OF AGG AND FB
To deal with the non-differentiability of the objective function caused by the AVE, we propose the amalgamated smoothing function AFB and build a precise smoothing Newton algorithm around it. The AFB function produces equivalent limit values on both sides of the secant line slope. Since the aim of this study is to resolve the non-differentiability of the piecewise objective function, we first obtain the segments, slopes, and breaking points before converting (13) into the canonical form shown in (21). The breaking points are what cause the non-differentiability of the piecewise objective function. Hence, we apply the canonical formula (21), only at those breaking points, to the proposed amalgamated function (AFB) as shown in (26). The secant line slope becomes smoother in the canonical piecewise objective function than in the initial piecewise objective function (13), as illustrated in Fig. 4.
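The exact amalgamation in (28)-(29) is not reproduced above. As an illustrative assumption only, averaging the two smoothers keeps the approximation smooth, inherits the minimum-estimation behaviour from both sides, and still approaches |x| as the smoothing parameter l shrinks:

```python
import math

def agg_abs(x, l):
    a = abs(x) / l
    return l * (a + math.log1p(math.exp(-2.0 * a)))  # assumed AGG smoother

def fb_abs(x, l):
    return math.sqrt(x * x + 4.0 * l * l)            # assumed FB smoother

def afb_abs(x, l=0.002):
    """Hypothetical AFB amalgamation: the mean of the AGG and FB smoothers.
    Any convex combination of two smooth approximations of |x| remains
    smooth and converges to |x| as l -> 0; the paper's (29) may differ."""
    return 0.5 * (agg_abs(x, l) + fb_abs(x, l))

# The amalgamation tightens toward |x| as l shrinks.
errs = [abs(afb_abs(0.1, l) - 0.1) for l in (0.01, 0.001)]
```

The averaging is one plausible design choice: it combines the logarithm-based curve of AGG with the square-root-based curve of FB, which matches the text's claim that the amalgamation expresses the smoothing in terms of the natural logarithm, the exponential, and the square root.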
According to [18], a smoothing Newton method can solve the non-differentiability issue of the AVE (1). With similar motivation, we employ the smoothing Newton algorithm and observe that the secant line slopes on the right and left sides of the function become equivalent. Fig. 3 shows the flowchart for smoothing the non-differentiability of the piecewise objective function based on the proposed AFB function within the smoothing Newton method. The proposed AFB function is based on the amalgamation of the two functions AGG and FB from (21) and (24). The steps of amalgamating the two functions are stated in (28), and the AFB is recast in (29). Fig. 4 shows the proposed smooth AFB function. In other words, the absolute-value function |x| linked with the canonical piecewise objective function is smoothed consistently in this study through the function AFB. Based on the standard AVE (1), the AFB function defines a function W(l, x) : R^{n+1} → R^{n+1}. We then observe the following results. Lemma 1: Let AFB be described through (31). Then the following results hold. (i) W(l, x) = 0 if and only if x solves (1). (ii) W is continuously differentiable on R^{n+1} \ {0}, and when (l, x) ≠ 0, the Jacobian matrix of W at (l, x) is defined accordingly.

C. SMOOTHING NEWTON TECHNIQUE
In this part, the smoothing Newton algorithm is investigated through the smoothing function AFB(l, x) to tackle the AVE issue in (28), and the convergence of the algorithm is established based on its properties. The stages of the standard smoothing algorithm are shown below.

1) SMOOTHING NEWTON ALGORITHM PROCEDURES
Step 0: Choose δ, σ ∈ (0, 1) and µ_0 ∈ R_+, set x_0 ∈ R^n, and set k_0 := (µ_0, x_0). Let e_0 := (1, 0) ∈ R × R^n and choose β > 1. Step 2: Compute the direction Δk_z = (Δµ_z, Δx_z) ∈ R × R^n from the Newton equation, where W(·) is specified by (31).
Step 3: Let α_z be the greatest value among 1, δ, δ², ... such that the vector norm ||W(k_z + α_z Δk_z)|| is less than or equal to the bound in (33). Step 4: Set k_{z+1} := k_z + α_z Δk_z and z := z + 1, and return to Step 1.
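Steps 0-4 can be sketched for a scalar residual. The concrete constants and the acceptance test below are assumptions in the spirit of the description, not the paper's exact line-search rule:

```python
def damped_newton(W, dW, k0, delta=0.5, sigma=1e-4, tol=1e-10, max_iter=100):
    """Damped Newton iteration: take the Newton direction (Step 2), pick the
    largest alpha in {1, delta, delta**2, ...} that sufficiently decreases
    |W| (Step 3), then update and repeat (Step 4)."""
    k = k0
    for _ in range(max_iter):
        w = W(k)
        if abs(w) <= tol:
            break
        d = -w / dW(k)                   # Newton direction
        alpha = 1.0
        while abs(W(k + alpha * d)) > (1.0 - sigma * alpha) * abs(w):
            alpha *= delta               # backtrack along the direction
            if alpha < 1e-12:
                break
        k += alpha * d
    return k

# Example: solve W(k) = k**3 - k - 2 = 0 (root near 1.5214).
root = damped_newton(lambda k: k ** 3 - k - 2,
                     lambda k: 3 * k ** 2 - 1, k0=1.5)
```

The backtracking loop is what makes the method globally safer than plain Newton: a full step is only taken when it actually reduces the residual norm.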

Lemma 2:
Let {k_z} be generated by the smoothing Newton procedure in Section III-C1. Then the following results hold. (i) The sequences {||W(k_z)||} and {τ_z} are monotonically decreasing. (ii) τ_z² ≤ βµ_z holds for every z. (iii) The sequence {µ_z} is monotonically decreasing, and µ_z > 0 for every z. To show the solvability of the Newton equation (31), the following lemma is needed.
Lemma 3: Let S and T be real symmetric matrices in R^{n×n}. If the minimum singular value of S is strictly larger than the maximum singular value of T, then the matrix S − T is positive definite.
Assumption 1: The minimum singular value of the matrix A is strictly larger than the maximum singular value of the matrix B. The solvability of the Newton equation (31) is then established as follows.
Theorem 1: Let W and W′ be stated via (30) and (31), respectively, and suppose that Assumption 1 holds. Then W′(l, x) is invertible for all (l, x) ∈ R × R^n with l > 0.
Proof: From (31), it is clear that W′(l, x) is invertible if A + B ∂AFB(l, x)/∂x is nonsingular. Suppose there exists p ≠ 0 such that (A + B ∂AFB(l, x)/∂x) p = 0. Then (33) follows, where c is the corresponding diagonal matrix. Based on Corollary 4.5.11 in [33], there exists a fixed ξ such that 0 ≤ λ_min(c^T c) ≤ ξ ≤ λ_max(c^T c) < 1 and λ_max(c^T B^T B c) = ξ λ_max(B^T B). Together with the fact that λ_min(A^T A) > λ_max(B^T B) > 0, this implies λ_min(A^T A) > λ_max(c^T B^T B c). It then follows from Lemma 3 that p^T A^T A p > p^T c^T B^T B c p, which contradicts (33). The proof is complete. Theorem 1 thus resolves the solvability of the Newton equations.
Lemma 4: Suppose that Assumption 1 holds. Then the sequence {k_z} produced by the smoothing Newton procedure in Section III-C1 is bounded.
Proof: By Lemma 2(iii), the sequence {µ_z} is clearly bounded, so it is only necessary to show that the sequence {x_z} is bounded. By Lemma 2(i), the sequence {||W(k_z)||} is bounded, which together with (30) implies that the corresponding sequence is bounded. Assume, without loss of generality, that there exists a constant ζ > 0 such that the bound (35) holds. Then, for every z, it follows from (35) that the corresponding inequality holds, and the assumption implies it is maintained for every z. Therefore, the sequence {x_z} is bounded, which completes the proof.
Theorem 2: Suppose that Assumption 1 holds and the sequence {k_z} is produced by the smoothing Newton procedure in Section III-C1. Then every accumulation point of {x_z} is a solution of (1). Proof: By Lemma 4, assume, without loss of generality, that k_z → k* = (l*, x*). By Lemma 2(i), obtain W* := ||W(k*)|| = lim_{z→∞} ||W(k_z)|| and τ* := min{l, W*} = lim_{z→∞} min{l, ||W(k_z)||}. We now show that W* = 0. Suppose, for contradiction, that W* > 0. In this case, Lemma 2(ii) implies l* > 0. The proof is split into the following cases.
Case 1: Suppose α_z ≥ α* > 0 for every z, where α* is a fixed constant. By (32), the resulting series is finite, which implies ||W(k_z)|| → 0. This contradicts W* > 0.
Case 2: Suppose lim_{z→∞} α_z = 0. Then, for every sufficiently large z, α̂_z := α_z/δ does not satisfy (33); that is, (||W(k_z + α̂_z Δk_z)|| − ||W(k_z)||)/α̂_z > −σ(1 − 1/β)||W(k_z)||. Since l* > 0, W is continuously differentiable at k*. Letting z → ∞, the above difference quotient yields a limit inequality, and (32) yields the corresponding equality. Combined with (35), this implies −1 + 1/β ≥ −σ(1 − 1/β), which contradicts the fact that σ ∈ (0, 1) and β > 1. Combining Cases 1 and 2 gives W(k*) = 0; hence x* is a solution of (1), and the proof is complete. To prove the superlinear local convergence of the smoothing algorithm, the Jacobian matrix of the function W must be nonsingular at the solution point. Lemma 5 shows that this assumption holds trivially for the problems of this study. Lemma 5: Suppose Assumption 1 holds and k* := (l*, x*) is an accumulation point of {k_z} produced by the smoothing Newton procedure in Section III-C1.
Then there exist a neighborhood O(k*) of k* and a constant q such that, for every k := (l, x) ∈ O(k*) with l ≠ 0, the Jacobian W′(k) is nonsingular and ||(W′(k*))^{−1}|| ≤ q. Proof: A straightforward calculation yields result (1). When Assumption 1 holds, result (2) follows along the lines of the proof of Lemma 3. Result (3) follows from [12, Lemma 2.6]. Using Lemma 2(iii) and Lemma 5, in the same manner as [13, Theorem 8], we obtain the local quadratic convergence of the smoothing Newton procedure in Section III-C1, as follows.
Theorem 3: Suppose that Assumption 1 holds and k* := (l*, x*) is an accumulation point of the sequence {k_z} produced by the smoothing Newton procedure in Section III-C1. Then the whole sequence {k_z} converges to k*, with ||k_{z+1} − k*|| = O(||k_z − k*||²) and l_{z+1} = O(l_z²).

2) PROCEDURE OF PROPOSED SMOOTHING NEWTON TECHNIQUE
The Newton-Raphson technique estimates the real zeros of a function using tangent lines: the root (solution) of the function is the zero of the tangent line. The procedure for obtaining the solution of the nonlinear equation based on the proposed smoothing objective function is as follows. First, define the proposed AFB smoothing piecewise objective function (29). Second, obtain the derivative of the AFB piecewise objective function. Third, set the initial value. Fourth, set the maximum number of iterations (NumItr). Fifth, set the tolerance (tol), saving the final iterate once the error falls below tol or NumItr is reached. Finally, plot the solution.
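These steps can be sketched end to end. The AFB smoother below (the mean of assumed AGG and FB forms), the reconstructed canonical coefficients, and the initial value are all assumptions for illustration; in particular, the starting point is placed inside the steep middle segment, since plain Newton without the line search of Section III-C1 needs a starting point near that region:

```python
import math

MU = 0.002                          # smoothing parameter from the experiments
E_COEF, F_COEF = 0.2, [0.4, -0.4]   # assumed canonical coefficients
BREAKS = [-0.4, 0.4]                # breaking points t_l, t_r

def s(x, mu=MU):
    """Assumed AFB smoother of |x|: mean of log-exponential and sqrt forms."""
    a = abs(x) / mu
    agg = mu * (a + math.log1p(math.exp(-2.0 * a)))
    fb = math.sqrt(x * x + 4.0 * mu * mu)
    return 0.5 * (agg + fb)

def ds(x, mu=MU):
    """Derivative of the assumed AFB smoother (tanh and x/sqrt terms)."""
    return 0.5 * (math.tanh(x / mu) + x / math.sqrt(x * x + 4.0 * mu * mu))

def q(x):   # Step 1: smoothed canonical objective
    return E_COEF * x + sum(f * s(x - b) for f, b in zip(F_COEF, BREAKS))

def dq(x):  # Step 2: its derivative
    return E_COEF + sum(f * ds(x - b) for f, b in zip(F_COEF, BREAKS))

x, tol, num_itr = 0.3, 1e-6, 10     # Steps 3-5: x0, tolerance, iteration cap
for _ in range(num_itr):
    step = q(x) / dq(x)
    x -= step
    if abs(step) < tol:             # stop once the update is below tol
        break
```

With these assumed forms the smoothed objective is differentiable everywhere, so the Newton update is well defined even at the former breaking points.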

IV. RESULTS AND DISCUSSION
As the application of piecewise-linear functions is limited when the differentiability of the function is important, generating a proper smoothing function estimator plays a significant role in obtaining an approximate smooth problem [34]. Therefore, we amalgamate the two best smoothing functions, AGG and FB, with the piecewise objective function (13) of this study to gain more control over computational time and to resolve the non-differentiability issue by making the secant line slope of the piecewise objective function (13) equivalent on both the right and left sides at the specific breaking points {−0.4, 0.4}. In this section, the efficiency of the proposed AFB smoothing function is compared with the well-known functions AGG [10] and FB [25] within the smoothing Newton procedure of Section III-C1.
To assess the performance of these three techniques, we first applied the smoothing techniques AGG (24), FB (27), and the proposed AFB (29) separately, and then computed the derivatives of these smoothing functions. Third, as the Newton method may not converge, the selection of the initial value plays an important role in obtaining a low error value [18]. We therefore set the initial values to {1, 3, 5, 8, 9, 10, 30, 80, 700} for each run, as these values offer low error rates. The maximum number of iterations is set to 10 for all three smoothing functions (AGG, FB, and the proposed AFB). As the optimum is reached once the error drops below the tolerance, the tolerance is set to 1e-6 for all functions. With these settings, the Newton method converges in the same number of iterations for all nine initial values of each smoothing function. The µ value controls the smoothness over the whole function domain, or it can be defined as a certain value at the breaking points {−0.4, 0.4}. In this work, µ was set to 0.002, as µ > 0 offers better smoothing in each of the three smoothing functions. To show that the proposed smoothing function has a lower computational time than the other two smoothing functions, we measure the processing time of the algorithm thirty times for each initial value, since the measured times vary, and then compute the mean over the thirty runs for each of the nine initial values.
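The timing protocol described above (thirty timed runs per initial value, then the mean) can be sketched as follows; the solver argument stands for any of the smoothing-function variants:

```python
import time
import statistics

def mean_runtime(solve, x0, runs=30):
    """Run the solver `runs` times for one initial value and return the
    mean wall-clock time, as in the text's evaluation protocol."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        solve(x0)
        times.append(time.perf_counter() - t0)
    return statistics.mean(times)

# Hypothetical usage over the nine initial values from the text; the
# placeholder solver would be replaced by the AGG, FB, or AFB Newton run.
initial_values = [1, 3, 5, 8, 9, 10, 30, 80, 700]
means = [mean_runtime(lambda x: x * x, x0) for x0 in initial_values]
```

Averaging over repeated runs matters because single wall-clock measurements at this scale (around 1e-4 s in the reported results) are dominated by timer and scheduler noise.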
Moreover, we applied four further smoothing functions {φ1, φ2, φ3, φ4}, taken from [19], [20], to demonstrate the superiority of the proposed AFB smoothing function. The parameter settings of the initial values, number of iterations, tolerance value, and smoothing value µ are the same as for the three smoothing functions AGG, FB, and the proposed AFB.
This part provides the numerical results of the smoothing Newton procedure of Section III-C1 for resolving the non-differentiability of the piecewise objective function of this study (13). All experiments are implemented in MATLAB. First, we compare the proposed technique with the AGG and FB techniques individually. The results agree with the numerical verification obtained by amalgamating the two smoothing functions, and they further emphasize the importance of addressing the non-differentiability issue of the canonical piecewise objective function. To strengthen the evaluation of our proposed AFB technique, besides comparing with AGG and FB alone, we also used the four smoothing functions {φ1, φ2, φ3, φ4} from [19], [20].
The experimental results verify that the mean error values for the AGG, FB, proposed AFB, φ1, φ2, φ3, and φ4 functions are 2.60E-15, 1.58E-15, −3.70E-17, 1.23E-17, 7.89E-16, 5.14E-15, and −3.73E-12, respectively, whereas the mean computational times are 0.00019, 0.0001789, 9.2219E-05, 0.0030, 0.0036, 0.0026, and 0.0029, respectively. Based on these results, we conclude that the proposed AFB technique outperforms the six recent functions AGG, FB, φ1, φ2, φ3, and φ4 in terms of computational time and error. This large reduction in mean error is not unexpected, considering that the proposed AFB is designed to address the non-differentiability issue by smoothing the secant line slope on both sides. In addition, there is a significant reduction in processing time compared with the AGG, FB, φ3, φ4, φ1, and φ2 techniques, respectively. The detailed results are presented in Tables 1 and 2 through Table VIII, and in Fig. 5 and Fig. 6.

A. EVALUATION METRICS
The Newton method approximates the real zeros of a function using tangent lines; the root (solution) of the function is the zero of the tangent line. The solution of the function is given by (37):

c_{i+1} = c_i − f(c_i)/f′(c_i), (37)
where $c_{i+1}$ and $c_i$ are the updated solution and the current iterate (starting from the initial value), respectively, $f(c_i)$ is the piecewise objective function of this study, and $f'(c_i)$ is its derivative. If $f(c_i)$ equals zero, then $c_i$ is an exact solution of $f(c) = 0$, and the algorithm returns equal values for $c_{i+1}$ and $c_i$. The error measures the discrepancy between the obtained estimate and the anticipated value; a smaller error indicates that the smoothing function is more accurate and trustworthy. The error value is defined in (38).
The parameters $c_{i+1}$ and $c_i$ take the same values as in the solution of the piecewise function.
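The Newton iteration and stopping test described above can be sketched as follows. Although the paper's experiments are implemented in MATLAB, this is an illustrative Python translation; the function names, tolerance, and iteration cap are our assumptions, and the standard update $c_{i+1} = c_i - f(c_i)/f'(c_i)$ with the error $|c_{i+1} - c_i|$ is used, consistent with (37) and (38):

```python
def newton(f, fprime, c0, tol=1e-12, max_iter=100):
    """Newton iteration c_{i+1} = c_i - f(c_i)/f'(c_i).

    Stops when the error |c_{i+1} - c_i| falls below `tol`,
    mirroring the error measure in (38).
    """
    c = c0
    for _ in range(max_iter):
        d = fprime(c)
        if d == 0.0:          # derivative vanishes: cannot take a Newton step
            break
        c_next = c - f(c) / d
        if abs(c_next - c) < tol:   # error criterion of (38)
            return c_next
        c = c_next
    return c

# Example: the real zero of f(c) = c^2 - 2 from the initial value c0 = 1.0.
root = newton(lambda c: c * c - 2.0, lambda c: 2.0 * c, 1.0)
```

In the experiments reported above, `f` would be the (smoothed) piecewise objective function of this study rather than this toy quadratic.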

V. CONCLUSION
We studied the AVE linked with the piecewise objective function, which causes non-differentiability at particular points of the function. As the objective function plays a vital role in the performance of the algorithms, constructing an appropriate approximate smoothing function is beneficial for obtaining an approximate smooth problem.
Therefore, this study proposed a new AFB approach based on an amalgamation of two smoothing techniques, namely the Aggregation (AGG) and Fischer-Burmeister (FB) functions. The aim is to gain more control over computational time and to make the secant line slopes on the right and left sides of the function equivalent. These functions are able to estimate the minimum of the function from both the left and right sides. Moreover, the proposed technique improves the performance of the algorithm by smoothing the non-differentiable points of the function.
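To make the amalgamation idea concrete, the sketch below smooths the prototypical non-differentiable term $|x|$ with a log-sum-exp (aggregation-type) smoother and a square-root (FB-type) perturbation, then combines them. This is only a hedged illustration: the paper's exact AFB formula is the one given in its own equations, and the convex-combination weight `w` here is our assumption, not the authors' construction.

```python
import math

def agg_abs(x, mu=1e-3):
    # Aggregation (log-sum-exp) smoothing of |x| = max(x, -x):
    #   mu * ln(exp(x/mu) + exp(-x/mu)),
    # rewritten via |x| + mu*ln(1 + exp(-2|x|/mu)) for numerical stability.
    ax = abs(x)
    return ax + mu * math.log1p(math.exp(-2.0 * ax / mu))

def fb_abs(x, mu=1e-3):
    # FB-type square-root perturbation of |x|: sqrt(x^2 + mu^2).
    return math.sqrt(x * x + mu * mu)

def afb_abs(x, mu=1e-3, w=0.5):
    # Hypothetical amalgamation: a convex combination of the two smoothers.
    # Both components are smooth everywhere, so the combination is too,
    # removing the kink of |x| at x = 0.
    return w * agg_abs(x, mu) + (1.0 - w) * fb_abs(x, mu)
```

Both smoothers overestimate $|x|$ near the breaking point $x = 0$ and converge to it as the smoothing parameter `mu` tends to zero, which is the behaviour the proposed technique exploits.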
The proposed amalgamated smoothing function, with its proper properties, is effective for building a precise smoothing Newton algorithm to tackle the AVE issue in the piecewise objective function of this study. The numerical results show that the real zeros of the objective function obtained with the proposed technique are closer to zero than those of the other six techniques, AGG, FB, ∅1, ∅2, ∅3, and ∅4.
ANAHITA GHAZVINI received the bachelor's degree (Hons.) in information technology (computer science) from Universiti Kebangsaan Malaysia (UKM), in 2013, and the master's degree in information technology (artificial intelligence) from UKM, in 2016, where she is currently pursuing the Ph.D. degree in computer science.
SITI NORUL HUDA SHEIKH ABDULLAH received the degree in computing from the University of Manchester Institute of Science and Technology, U.K., the master's degree in artificial intelligence from Universiti Kebangsaan Malaysia, and the Ph.D. degree in computer vision from the Faculty of Electrical Engineering, Universiti Teknologi Malaysia. At the start of her career, she was involved in national and international activities with organizations such as the Royal Police Malaysia, Cyber Security Malaysia, Cyber Security Academia Malaysia, the Federation of International Robot-Soccer Association (FIRA), the Asian Foundation, the Global Ace Professional Certification Scheme, MIAMI, MACE, and the IDB Alumni. She is currently the Chairperson of the Center for Cyber Security. Her research focuses on digital forensics, pattern recognition, and computer vision surveillance systems. She has published two books, entitled Pengecaman Pola or Pattern Recognition and Computational Intelligence for Data Science Application, and more than 50 journal and 100 conference manuscripts, respectively.
MASRI AYOB received the Ph.D. degree in computer science from the University of Nottingham, in 2005. She was a member of the ASAP Research Group at the University of Nottingham. She is currently a Principal Researcher with the Data Mining and Optimisation Research Group (DMO), Centre for Artificial Intelligence Technology (CAIT), UKM. She is also a Lecturer with the Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia (UKM). She has published more than 100 articles in international journals and at peer-reviewed international conferences. Her main research areas include meta-heuristics, hyper-heuristics, scheduling and timetabling (especially educational timetabling), healthcare personnel scheduling and routing problems, and the Internet of Things. She has served on the programme committees of more than 50 international conferences and as a reviewer for high-impact journals.