Real-Time Knock Characterization Using Adaptive Filters and Power Estimators

We combined adaptive filters with power estimators to characterize the knock signal, obtained from a knock sensor, in an internal combustion engine. The filters were implemented using the automotive model-based design methodology, and the resulting software was embedded into hardware for a real-time evaluation of the proposed solution. The knock signals could be qualitatively identified in real time and thus have the potential to aid in the management of flexible-fuel engines. This approach has an extensive range of applications within the automotive industry, since it can be implemented within any model-based control strategy. For example, it can be applied in commercial ECUs, currently used in most vehicles for knock detection, either by eliminating the internal, dedicated integrated circuit for knock identification or by serving as a redundancy device (i.e., for safety purposes). Finally, this methodology can identify the knock signal, obtained from a knock sensor, using only an algorithm implemented in the ECU's processor, and we show that this identification is achievable in real time.


I. INTRODUCTION
Internal combustion engines are complex machines present in a large number of modern systems, such as automobiles, aircraft, heavy-duty machinery, and power generators. They require a control system that guarantees proper operating conditions in order to provide optimized power output, fuel efficiency, and compliance with strict safety and environmental regulations. Internal combustion engines are generally managed by an electronic control unit (ECU), which must handle, in real time, several input and output parameters.
In automotive applications, one of the key parameters in engine control is the spark instant, which ignites the air/fuel mixture. Choosing the appropriate instant for the spark yields the maximum pressure over the piston, leading to maximum torque output. Advancing the spark instant with respect to the one at which the piston reaches its top position inside the cylinder, i.e., top dead center (TDC), is generally used to obtain maximum power output (peak pressure occurs within 8 to 10 degrees after TDC).
There is a limitation to that strategy, however. If the spark occurs too early, the flame front of the ignited air/fuel mixture will reach the piston too early (i.e., before TDC; as shown by [1], this can be within 23 to 30 degrees before TDC, depending on the engine and its conditions), and some parts of the mixture may self-ignite, generating a new flame front that collides with the original one. This causes shock waves that can excite the natural frequency of the engine chamber, producing a strong vibration. This phenomenon, known as ''knock'', generates a characteristic sound. The occurrence of knock indicates that the optimal spark instant was not found, and the spark advance must be reduced to achieve optimal performance. Knock can also occur when there is spontaneous self-combustion inside the chamber, which may be the result of an occasional spark due to sediments on the spark plugs [2]. Knock, with its characteristic strong mechanical vibration in a certain frequency range, is generally identified in automobiles with a knock sensor [3], a piezoelectric device that converts the knock vibration into electrical signals. Many techniques have been used to identify this phenomenon from a measurement of a noisy mechanical signal [4]-[6]. When the knock occurs, the electrical signal of the sensor is received and identified by an integrated circuit (IC) that forwards an analog or digital signal to a processing unit, either a microcontroller or a microprocessor. That unit then acts to reduce the spark advance angle until the knock disappears.
One of the problems with this detection technique (using a knock sensor) is that other noisy signals, that are uncorrelated to the knock but that have similar frequencies, could be present in the engine, which may cause a false-positive knock identification by the knock sensor. For this reason, only when the engine is under certain working conditions, which were prespecified during the development phase, does the ECU microprocessor consider the incoming signal from the IC for knock detection.
Several studies have been carried out to identify an optimal ignition advance to increase engine efficiency, and the knock characterization and detection is an important component to achieve it. Many studies have shown that the occurrence of knock depends on engine thermodynamics and the chemical composition of the fuel [7]- [10]. For example, one study found that the inlet air temperature combined with direct ethanol injection can be used to reduce knock occurrence and hence enable further advance of the spark timing [11]. Other studies have explored how such information could be introduced in a control framework by using integrated signal detection [12] based on either knock distribution estimation [13], [14], the distribution of the phenomenon's maximum amplitude of the pressure oscillation (MAPO) [15], or even integrating a model of knock occurrence using probabilistic approaches to tune a control framework [16]. The knock events have been modeled as probability density functions of the spark angle advance to determine the best threshold for knock avoidance. In this way, determination of the optimal threshold for the crank angle advance can be integrated into a control strategy [17].
The proposed threshold has been tested in a mock-up setup using model-based tools (ETAS INTECRIO) and Simulink, and a similar approach has been applied to pressure sensor signals by [18]. Probabilistic modeling of knock occurrence has also been applied by [19]; the authors used different input parameters related to the engine, such as mixture fraction, enthalpy, and temperature of unburned gases. Time-frequency analysis of the knock has also been extensively explored. It may be used either as a pre-processing tool [4], [15], [20], or in combination with pattern recognition techniques [21], given that the knock phenomenon presents a typical frequency range, although this range varies for different engine manufacturers [5], [22]. In a frequency-related analysis, an experiment [23] utilized wavelets as a tool for denoising the knock signal, together with empirical mode decomposition, which essentially extracts intrinsic mode functions of the signal of interest (i.e., oscillation modes of time-varying frequencies that can represent local characteristics of non-stationary signals; they can also be interpreted as an exploratory basis expansion of a signal), for knock detection in a mock-up gasoline engine. Frequency-based decomposition has also been used to clean up and characterize knock signals using a mock-up engine setup [24]. Additionally, some of the aforementioned methods have been used in embedded real-time frameworks [13], [14], [17]-[19], [23]-[25].
Since knock detection is more reliable with an in-cylinder pressure sensor, few authors have used the knock sensor as the only source to identify it [14]; most investigations focused on extracting metrics for knock detection based on in-cylinder pressure sensors [26]. Time-frequency analysis has also been utilized to develop new metrics of knock detection, providing a binary output after the analysis [27].
Here, we propose an alternative real-time software-based method to distinguish different characteristics of the knock occurrence. The knock phenomenon is characterized by signal processing techniques (adaptive and nonlinear filters) that assume signal acquisition from a knock sensor: a piezoelectric accelerometer, with one degree of freedom, which converts the mechanical vibrations of the engine's block into an electric signal. This software was implemented in embedded hardware in order to demonstrate its feasibility in real time. If feasible, our approach could allow the elimination of the dedicated IC currently used for that function in commercial ECUs. This methodology also offers a redundancy device for the ECU (if the dedicated IC is maintained), which could be used for safety purposes. The proposed method can also distinguish between different knock intensities and characterize the duration of the knock signal. The method was fine-tuned and validated in simulated knock scenarios using a modeled knock signal with additive white Gaussian noise (AWGN) [28]. We achieved good real-time characterization in such scenarios by combining adaptive filters (AF) with power estimators. The adaptive filter can emphasize knock occurrence by distinguishing the knock statistics from the background AWGN statistics. The power estimator can smooth the filter output, providing clear evidence of the knock occurrence.
The main application is for engines with flexible-fuel technology, which work with any ethanol and gasoline mixture. In a vehicle with such technology, the relevance of knock characterization is even higher than in vehicles running pure gasoline, since the maximum power output (i.e., torque) is highly dependent on the fuel mixture, especially under higher compression rates. A proper knock characterization eases the signal detection, enabling the ECU to manage the ignition timing precisely, which leads to an optimized power output for any fuel mixture. The knock characterization/detection strategy is only a small part of the whole engine management algorithm, which comprises several steps of modeling, integration, code development, calibration, and validation.
This paper is organized as follows: Section II presents the background used in this investigation, which includes the definition of the knock phenomenon and an introduction to the signal processing algorithms. Section III presents the methodologies used to implement the software pipeline; Section IV presents some details of this implementation; Section V presents simulations to evaluate the proposed algorithms under several scenarios; and Section VI presents the results in an embedded real-time system.

II. BACKGROUND

A. THE KNOCK PHENOMENON
The knock phenomenon is an abnormal combustion and represents an intrinsic limitation on engine performance. The knock thus limits how far the torque can be explored, allowing greater or smaller torque requests depending on several factors, such as the chamber geometry and fuel quality.
Several factors can cause the knock, which produces a non-stationary electrical signal. When the knock occurs, it shows an energy peak followed by a damping effect until it reaches the background noise level of the sensor. Therefore, we modeled the knock signal as the impulse response of a second-order under-damped system with additive background white noise. That damped sinusoid has a statistical pattern that is considerably different from white noise, which justifies the use of statistical filters for the knock characterization. In order to characterize this signal, we used three types of estimators: the power estimator, the Least Mean Square (LMS), and the Normalized Least Mean Square (NLMS). The last two are stochastic estimators, also referred to as adaptive filters.
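As an illustration, the damped-sinusoid-plus-noise model described above can be sketched as follows. This is a minimal sketch: the sampling rate, resonance frequency, damping factor, amplitude, and noise level below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def knock_signal(fs=100_000, duration=0.02, f0=7_000, zeta=0.05,
                 amplitude=1.0, noise_std=0.05, onset=0.005, seed=0):
    """Synthetic knock: the impulse response of a second-order
    under-damped system (a damped sinusoid) starting at `onset`,
    buried in additive white Gaussian background noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * duration)) / fs
    x = noise_std * rng.standard_normal(t.size)   # sensor background noise
    k = t >= onset                                # knock active region
    tau = t[k] - onset
    # damped sinusoid: exp(-zeta * 2*pi*f0 * tau) * sin(2*pi*f0 * tau)
    x[k] += amplitude * np.exp(-zeta * 2 * np.pi * f0 * tau) \
            * np.sin(2 * np.pi * f0 * tau)
    return t, x
```

The knock portion clearly dominates the background noise near the onset and then decays back to the noise floor, matching the qualitative description above.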

B. ADAPTIVE FILTERS
The adaptive filters used here are recursive stochastic methods derived from gradient descent and Newton's method. These algorithms find, at each iteration i, a vector of coefficients w_i that minimizes the estimation error (also referred to as the cost function), defined as:

J(w) = E |d − u w|^2,  (1)

where d is the measured value (a scalar random variable), E is the expected value operator, u is the observation (regressor) vector, and J(w) is the cost function.
Here, random variables are represented as bold letters. The term d̂ is the estimate of the measured value, defined as:

d̂ = u_i w_i,  (2)

in which w_i is a column vector and u_i is a random row vector, both of size M. Every estimation problem aims to drive its cost function to the minimal possible value. It can be shown that the minimum value of J(w) is given by [29]:

J_min = σ_d^2 − R_ud R_u^(−1) R_du,  (3)

where R_ud = E u d* is a row vector, R_du = E d u* is a column vector representing the cross-covariance between the observation vector u and the measured value d, and R_u^(−1) is the inverse of the covariance matrix R_u = E u* u of the observed values, which is a square matrix. The notation ''*'' refers to complex-conjugate transposition, and σ_d^2 is the variance of the measured value.
The cost function's minimal value, Eq. (3), is obtained by applying the optimal coefficient vector, denoted by w_o, to the cost function, Eq. (1). The optimal coefficient vector w_o is the solution of the linear estimation model, also known as the normal equation:

R_u w_o = R_du.  (4)

If R_u is invertible, the optimal coefficient vector can be obtained by:

w_o = R_u^(−1) R_du.  (5)

In an iterative/recursive form, and for a given cost function, the optimal coefficient vector can be approximated by the steepest descent algorithm, defined as:

w_i = w_{i−1} + µ (R_du − R_u w_{i−1}),  (6)

where µ is the step size. The term ''approximated'' is used because the steepest descent algorithm converges to w_o only as i → ∞, and only under certain choices of the step size µ, as discussed later. Therefore, for the algorithm in Eq. (6), w_i approaches the closest possible value to w_o under a finite number of iterations and, consequently, w_i remains within a certain range of the minimum, which could be either local or global. By setting w̃_i = w_o − w_i, the error between w_i and the optimal coefficient vector w_o at instant i, and by using R_du = R_u w_o from Eq. (4), it can be shown [29] that:

w̃_i = (I − µ R_u) w̃_{i−1},  (7)

where I is the identity matrix (of appropriate dimensions); the matrix inside the parentheses governs the convergence speed of the algorithm in Eq. (6), both from w_i to w_o and from J(w) to J_min. Each eigenvalue λ_k of R_u determines a convergence mode of Eq. (6), represented by 1 − µλ_k, with k > 0 and k ∈ N. The smaller |1 − µλ_k|, the faster w_i converges towards w_o and, consequently, J(w) towards J_min. The step size µ is intrinsically connected to the convergence speed and is bounded by the eigenvalues of R_u because, as shown in [29], µ must ensure convergence, with:

0 < µ < 2/λ_max,  (8)

where λ_max is the largest eigenvalue of R_u. Equivalently, Eq. (8) guarantees that every convergence mode 1 − µλ_k lies within the unit circle.
The vector of coefficients w can also be obtained from Newton's method [29], which is derived from the steepest descent algorithm and defined as:

w_i = w_{i−1} + µ (εI + R_u)^(−1) (R_du − R_u w_{i−1}),  (9)

where ε is a small positive value for regularization adjustments. Newton's method, unlike gradient descent, does not depend on the values of R_u to make the error w̃ converge to zero; it depends only on the step size value, µ. As shown in [29], if the condition 0 < µ < 2 is satisfied, w̃ → 0 is guaranteed for this algorithm, again under the premise that i → ∞. Newton's algorithm also differs from gradient descent in the number of convergence modes: for a quadratic cost function, Newton's method has only a single convergence mode, which depends on the step size and is defined by:

1 − µ,  (10)

for the given cost functions. Since Newton's method reaches the optimal value w_o faster than gradient descent, the minimal estimation error is also reached faster. In real applications, the statistics of the signal of interest are rarely known, even though they are required by the steepest descent algorithm (R_du and R_u are deterministic quantities). Therefore, the solution presented in Eq. (5) cannot be reached in practice. Nonetheless, a few stochastic algorithms were created based on the steepest descent algorithm. Such algorithms employ instantaneous approximations of R_du and R_u to sidestep the lack of knowledge of the signal statistics. These approximations are given by [29]:

R_du ≈ d(i) u_i*,  R_u ≈ u_i* u_i,  (11)

where u_i = [u(i), u(i−1), . . . , u(i−M+1)] is a regressor vector of the M most recent measurements and d(i) is the value of d at instant i. The approaches based on Eq. (11) generate the algorithms used in this paper, namely the LMS and NLMS.

1) LEAST MEAN SQUARE (LMS) ALGORITHM
The LMS algorithm is based on the gradient descent algorithm presented in Eq. (6). After applying the stochastic approximations defined in Eq. (11), the following algorithm is obtained:

w_i = w_{i−1} + µ u_i* (d(i) − u_i w_{i−1}).  (12)

The LMS inherits all of the characteristics of the gradient descent algorithm, such as a strong dependency of convergence on the input values. On the other hand, this algorithm is computationally simple and easy to implement.
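A compact sketch of the LMS recursion in Eq. (12), written here for real-valued signals (so the conjugate u_i* reduces to u_i); the system-identification check in the usage note is only an illustrative sanity test, not the knock setting of the paper:

```python
import numpy as np

def lms(d, u, M=3, mu=0.05):
    """LMS adaptive filter, Eq. (12):
    w_i = w_{i-1} + mu * u_i * e(i), with e(i) = d(i) - u_i . w_{i-1}."""
    w = np.zeros(M)
    W = np.zeros((len(d), M))   # history of coefficient vectors
    reg = np.zeros(M)           # regressor u_i = [u(i), ..., u(i-M+1)]
    for i in range(len(d)):
        reg = np.roll(reg, 1)
        reg[0] = u[i]
        e = d[i] - reg @ w      # a priori estimation error
        w = w + mu * reg * e    # stochastic-gradient update
        W[i] = w
    return W
```

For a quick sanity check, identifying the short FIR system d(i) = 0.5u(i) − 0.3u(i−1) + 0.2u(i−2) from white-noise input drives w_i toward [0.5, −0.3, 0.2].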

2) NORMALIZED LEAST MEAN SQUARE (NLMS) ALGORITHM
The NLMS algorithm is derived by applying Eq. (11) to Newton's method from Eq. (9). With a few algebraic manipulations [29], one obtains:

w_i = w_{i−1} + (µ / (ε + ||u_i||^2)) u_i* (d(i) − u_i w_{i−1}).  (13)

Just as the LMS inherits the gradient descent characteristics, the NLMS inherits those of Newton's method, such as fast tracking of the signal to be estimated, which is a result of the convergence rate depending only on the step size.
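The NLMS recursion in Eq. (13) differs from the LMS only by the normalization of the step size; a real-valued sketch (again, the FIR identification below is just an illustrative sanity check):

```python
import numpy as np

def nlms(d, u, M=3, mu=0.5, eps=1e-8):
    """NLMS adaptive filter, Eq. (13):
    w_i = w_{i-1} + (mu / (eps + ||u_i||^2)) * u_i * e(i)."""
    w = np.zeros(M)
    W = np.zeros((len(d), M))
    reg = np.zeros(M)                 # regressor buffer
    for i in range(len(d)):
        reg = np.roll(reg, 1)
        reg[0] = u[i]
        e = d[i] - reg @ w
        w = w + (mu / (eps + reg @ reg)) * reg * e   # normalized update
        W[i] = w
    return W
```

The normalization by ||u_i||^2 is what makes the convergence rate insensitive to the input power, which is the fast-tracking property mentioned above.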

C. ADAPTIVE PREDICTION
When a narrowband signal s(i) is embedded in a broadband signal v(i) (for example, white noise), where i indexes the iterations or temporal realizations, adaptive predictors can be used to separate them [30]. An adaptive predictor (AP) comprises a filtering structure, usually a transversal finite impulse response (FIR) filter, and a learning algorithm. At instant i, the FIR filter is represented by its (truncated) impulse response, captured in the column vector w_i.
In order to separate s(i) from v(i), the AP tries to predict the trend of the reference signal, represented by u(i), through a linear combination of previous samples, i.e.:

û(i) = u_{i−1} w_{i−1}.

More explicitly, the linear combination of past samples is taken over:

u_{i−1} = [u(i−1), u(i−2), . . . , u(i−M)],

which is the regression vector of past samples, while w_{i−1} is the impulse response of the predictor filter at instant i − 1.
The vector w_{i−1} is designed to minimize the prediction error:

e(i) = u(i) − u_{i−1} w_{i−1}.

The optimal predictor, w_o, can be found using the normal equation, Eq. (4) [31]:

w_o = R_u^(−1) R_du,

where R_u = E u* u and R_du = E d u* are the signal statistics, as shown in the previous section. The index i indicates an iterative calculation of w_o from R_u and R_du at each time i. In the prediction case, d ← u(i) and u ← u_{i−1}.

D. POWER ESTIMATOR
It is well known that the energy E of a signal x(i) can be defined as [32]:

E = Σ_{i=−∞}^{∞} x(i)^2,  (14)

and the instantaneous power p(i) is given by [32]:

p(i) = x(i)^2.  (15)

Direct application of Eq. (14) is not possible because the sum is infinite. A more practical algorithm would be y(i) = y(i − 1) + x(i)^2, which accumulates the energy produced since i = 0, but this algorithm is unstable. A better choice is:

y(i) = α y(i − 1) + β x(i)^2,  (16)

in which 0 < α < 1 and α = 1 − β. If β is close to one, y(i) approximates the instantaneous power. If α is larger than β (which should not be zero), the main pattern shown in a realization of y will be the attenuated accumulated energy of the applied signal x(i); that is, the signal y(i) is more energy-like. The non-linearity (squaring) attenuates relatively small values of x(i). Nevertheless, this algorithm, in principle, may not be very useful for knock events with amplitude values close to the sensor background level or its noise. Lastly, from Eq. (16), the higher the α, the more emphasis is given to energy and, consequently, the lower the α, the more emphasis is given to instantaneous power. Therefore, the higher the α, the slower the response of the power estimator to changes in the signal x(i); for a faster response, α has to be small. We propose a method in which adaptive filtering is combined with power estimation, in order to distinguish the different knock levels as well as the different durations of knock events.
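The recursion in Eq. (16) is a one-line filter; a sketch follows (the default β value is an illustrative assumption):

```python
import numpy as np

def power_estimator(x, beta=0.05):
    """Recursive power/energy estimator, Eq. (16):
    y(i) = alpha*y(i-1) + beta*x(i)^2, with alpha = 1 - beta.
    Larger alpha -> smoother, more energy-like output;
    smaller alpha -> closer to the instantaneous power x(i)^2."""
    alpha = 1.0 - beta
    y = np.empty(len(x))
    acc = 0.0
    for i, xi in enumerate(x):
        acc = alpha * acc + beta * xi * xi
        y[i] = acc
    return y
```

For a constant input x(i) = c the output converges to c^2, which shows that the α + β = 1 constraint preserves the power scale of the signal.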

III. THE V-MODEL DESIGN AND IMPLEMENTATION SCHEME
In order to better explain how the contributions of this paper were obtained and validated, we explain in this section the design paradigms, known as the ''V''-model and model-based development, and in section IV we explain how they were used in this research. The design and implementation of a complex system, such as the one developed here, can be divided in many ways. Typically, after the specification of requirements, a logical architecture design, composed of function blocks that interact with each other, is done. Then, the technical system architecture (hardware and communication infrastructure) is proposed. Those steps form the system development phase. After that, the software development phase begins, in which the requirements are translated from the system development phase, the architecture is proposed, and software components are designed and coded (see the central part of Fig. 1). Integration of components, acceptance tests, and verification and validation must be performed. The V-Model system development methodology, shown in Fig. 1, represents a way to organize the development cycle such that the entire system life cycle can be more efficient and controllable. The left side of the ''V'' corresponds to the design process, in different abstraction levels, ranging from the requirements to the software implementation. The right side corresponds to the integration and test phase, which can consist of a more detailed phase such as testing of software components or more abstract ones that employ integration of components.
On the left side of the ''V'', after defining the specifications and requirements of the system, models of the plant and controllers are obtained and simulated; corrections can already be made at this level, which is entirely virtual. The appropriate control hardware is then selected, and the code is generated and uploaded. All of the parts are integrated and tested up to the acceptance tests (on the right side). This methodology is adequate for the development of complex systems and complies with the ASAM MCD-2 MC automotive standard, which defines the description formats of the ECU's internal variables used for measurement and calibration [34].

A. MODEL-BASED DEVELOPMENT
Model-Based Development (MBD) is a method of developing software in a modular form, making the application level of the software independent of the hardware. This is possible because automotive software is divided into a stack of layers, such that each layer provides services to the level above through standard function calls. In this sense, the ECU's RTOS (real-time operating system, such as RTA-OSEK) provides system calls to the applications, as well as basic software (BSW) services comprising the input and output drivers, including complex drivers (such as the injector driver). In the following, we describe, using the V-model, the steps followed by this kind of project, which is based on model-based development (see Section IV and Fig. 2).

B. FUNCTION GENERATION
Function generation refers to the development of the knock detection function to be embedded in the ECU, corresponding to the red part of Fig. 2. Initially, taking into account only the logic behavior, this could be executed using block diagrams, programming languages, embedded system description language (ESDL), or even state machines. Those blocks then serve as the input to the automatic code generation of the next phase.

C. SOFTWARE DEVELOPMENT
In the software development phase, the automatic code generation is performed. This consists of compiling the block diagrams created in the previous phase in order to generate the A2L file (an ECU description file) and a HEX file (the ECU program), which are widely used in the automotive industry. This corresponds to the blue part of the ''V'' in Fig. 2.

D. VIRTUAL TEST AND VALIDATION
In the virtual test and validation phase, the detection algorithm (function) is deployed in the ECU, which is connected to a computer running a real-time operating system (RTOS). The computer works along with the ECU in a closed-loop configuration, running a virtual plant (in this case, the knock signal generator) that allows for a complete simulation of the real environment in which the ECU will be applied. The computer must be equipped with analog-to-digital (and digital-to-analog) converters, as well as other signal conditioners, in order to generate and receive analog signals that can be directly exchanged with the ECU as if it were reading sensors in the real car. This type of computer is generally called a HIL (hardware-in-the-loop) plant, and it can also simulate sensor and actuator models, as well as set-points from the car user.

E. MEASUREMENT AND CALIBRATION
The last phase of the V-Model corresponds to the test and validation phase using the real plant, i.e., the ECU is connected to a vehicle, where the final task of calibration can be executed. The calibration is performed by accessing the ECU through automotive protocols and interfaces for measurement and calibration, such as CAN, CCP, or ETK.
Those protocols allow access to the command set, in order to: obtain values from the measurement variables, write values in the maps and calibration parameters online, and change these maps while the real plant (e.g., engine and transmission) operates, without needing to reprogram the ECU. In this phase, adjustments related to fuel, sensors, drivability, mileage, comfort, and others are performed with no need to compile another code.

IV. IMPLEMENTATION OF THE ADAPTIVE FILTER USING V-MODEL
The primary goals of this project are: • Generate the knock phenomenon in a test vehicle; • Collect the knock signals accordingly; and • Process the acquired signal using dedicated hardware running the implemented solution in order to characterize the knock phenomenon.

A. SELECTION AND USE OF SIGNAL PROCESSING ALGORITHMS
In Section II-B, the proposed algorithms were introduced to solve the problem discussed in Section II-A. The use of adaptive filters must be justified, since they may carry higher computational complexity, as compared to power estimators. The use of adaptive filters is explained by the AP setting, as described in Section II-B. Considering Eq. (5), one can establish two operation scenarios for the proposed algorithms in the context of knock occurrence: (a) normal operation and (b) knock.

• (a) Normal Operation: u(i) = v(i).
In this scenario, since v(i) is white noise and u(i) is the resulting sensor signal, R_u = E u* u = σ_v^2 I and R_du = E u(i) u*_{i−1} = 0. Therefore, w_o_i = 0_{M×1}, i.e., it is numerically zero.

• (b) Knock: u(i) = s(i) + v(i).
In this scenario, the damped oscillation measured by the sensor due to knocking is assumed to be a correlated signal embedded in AWGN, such that:

R_u = E u*_{i−1} u_{i−1} = R_s + σ_v^2 I,  (17)

with R_s = E s*_{i−1} s_{i−1}. Equation (5) requires the inversion of R_u, which can be obtained by applying the matrix inversion lemma [29], [31] to R_u:

R_u^(−1) = (1/σ_v^2) [ I − R_s (σ_v^2 I + R_s)^(−1) ].  (18)

Applying this result in Eq. (5), as previously done in [31], and combining it with Eq. (17), implies that:

w_o_i = R_u^(−1) E s(i) s*_{i−1}.  (19)

Let c = s_{i−1} s*_{i−1}, so that, for each measurement,

w_o_i ≠ 0_{M×1},  (20)

i.e., the obtained w_o_i is numerically different from zero. This property can be efficiently explored to detect the knock phenomenon by establishing a certain threshold (THR) against which the optimal predictor components w_o_i are checked at each iteration i. In order to better characterize the phenomenon, the entry [w_o_i]_1 is chosen for this comparison (that is, n = 1), since the first entry of w_o_i tends to be the largest among all components because of the correlation of s(i) with itself [35]. Here, the threshold THR was determined empirically, but other investigations have explored finding it statistically [5], [12], [15], [20], [21], [25]. The next step is to efficiently design w_o_i. This design is complex since the parameters of the signal s(i) are not known a priori, as discussed in Section II-B. Therefore, the normal equations must be solved in an approximate form via the adaptive algorithms of Section II-B [29]. With an initial estimate w_{−1}, w_o_i can be updated (optimized) in an adaptive manner, as:

w_i = w_{i−1} + µ p_i,  (21)

where p_i is a directional update vector. Eq. (21) is the general form of the gradient descent algorithm shown in Section II-B. If p_i is chosen according to the stochastic law of the gradient descent algorithm, the LMS and NLMS algorithms are obtained, as shown in Section II-B [29]. Both algorithms were successfully tested. If µ is correctly chosen, after some iterations, w_i will be statistically close to w_o_i. If a certain threshold, THR, is carefully chosen, the adaptive algorithms can identify the knock.
The only difference is that the tests were conducted with w_i instead of w_o_i, as w_o_i is unknown. Once the knock peak is detected, the time taken for the output to return below THR can be interpreted as an indirect measure of the knock signal's length, or duration.

B. SOFTWARE WORKFLOW
The software workflow is the sequence of algorithms applied to the signal. The definition of this workflow is contingent on the analysis presented in the next section; after that analysis, the software workflow is discussed in further detail along with the physical experimental results.

V. SIMULATIONS
This section presents several evaluations (scenarios) in a simulated environment using synthetically generated signals, and shows that the selected algorithms can characterize the signals according to their duration and intensity. Several scenarios of knock occurrence were proposed, and their respective simulations were performed in MATLAB; the code is available at https://github.com/rafa-coding-projects/RTKnockChar. It is important to highlight that, since the signals are simulated, the time axis in all plots in this section is fictitious and intentionally scaled to be representative of a signal having a knock event of around 7 kHz. Since the knock signal is observed from the knock sensor, it is measured in Volts; the output of the proposed digital filters is also measured in Volts.
The fast-tracking characteristic of the NLMS, which stems from its derivation from Newton's method, can be combined with the characterization obtained by the power estimator for better identification of the phenomenon. The reliability of the LMS and its computational simplicity, combined with the power estimator, can result in an easy and affordable solution for an embedded system. The properties of these adaptive algorithms have been comprehensively explored in [29].
The phenomenon characterization needs to reduce or filter the background noise to the smallest possible value and emphasize the behavior due to the knock as much as possible. A combination of the algorithms can even promote an enhanced response at the end. For example, adaptive predictors can remove the sensor background noise from the knock signal. Then, this property may be used with the power estimator, taking into account that the latter alone does not present a good performance when filtering such noise [28].
In addition to these characteristics, the adaptive algorithms offer a softened response of the knock signal in the coefficient vector w i . Therefore, both softened responses of LMS and NLMS algorithms could be inputs to the power estimator.
The use of adaptive algorithms for the initial stage of the combination is explained by the fact that they are sensitive to the signal statistics, as shown in section IV-A, contrary to the power estimator, which is a signal power indicator. The power estimator does not present a satisfactory performance in identifying the signal in a scenario in which the knock amplitude is close to that of the background noise. However, the signal statistics change drastically due to the correlation arising from the knock. According to section IV-A, the first position of the vector w i is the one that should be used in the power estimator [28].
Then, distinct scenarios of the knock signal, applied to different combinations of the LMS, NLMS, and power estimator, are evaluated. The knock profile is a damped sinusoid with AWGN [28]. Random intensities and occurrences of the phenomenon are generated to evaluate the algorithms in a variety of scenarios: 1) Normal knock: random knock occurrence with random amplitude; 2) Critical case: subsequent knock occurrences, each with an amplitude higher than the previous one; and 3) Knock and background noise with similar amplitudes: like the normal knock scenario, but with the knock amplitude necessarily close to the background noise. The critical case is so named because the signal is highly correlated with itself for most of its duration, making the filtering more challenging.
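Such a test signal can be sketched as a damped sinusoid plus AWGN. In the sketch below, the sample rate, decay constant, and noise level are illustrative assumptions, not the values used in the simulations:

```python
import math
import random

def synthetic_knock(n, fs=100_000.0, f_knock=7_000.0, amp=1.0,
                    onset=0, decay=500.0, noise_std=0.05, seed=0):
    """Damped-sinusoid knock burst (~7 kHz) with AWGN, in the spirit of the
    model of [28]. fs, decay and noise_std are illustrative assumptions."""
    rng = random.Random(seed)
    sig = []
    for i in range(n):
        if i >= onset:
            t = (i - onset) / fs
            knock = amp * math.exp(-decay * t) * math.sin(2 * math.pi * f_knock * t)
        else:
            knock = 0.0
        sig.append(knock + rng.gauss(0.0, noise_std))
    return sig
```

Varying amp and onset per burst reproduces the random intensities and occurrences of scenario 1, while sequences of bursts with increasing amp reproduce the critical case.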
In order to choose the parameters and tune the filter response, it is important to follow the relationship established in [29]. Under non-stationary conditions, the step size µ must comply with the following relations for mean-square convergence and stability:

Σ_{k=1}^{M} µλ_k / (1 − µλ_k) < 1, (22)

for real values, and

Σ_{k=1}^{M} µλ_k / (2 − 2µλ_k) < 1, (23)

for complex values. With λ_k denoting an eigenvalue of the eigendecomposition of R_u, and assuming white input, λ_k = σ²_u. Therefore, replacing in Eq. (22) and Eq. (23) yields Mσ²_u µ < 1 − σ²_u µ (real) and Mσ²_u µ < 2 − 2σ²_u µ (complex). Then,

µ < 1 / ((M + 1)σ²_u), (24)

for real values, and

µ < 2 / ((M + 2)σ²_u), (25)

for complex values. Eqs. (24) and (25) show that the higher the order M, the smaller the step size µ must be.
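Solving Mσ²_u µ < 1 − σ²_u µ and Mσ²_u µ < 2 − 2σ²_u µ for µ gives the bounds µ < 1/((M + 1)σ²_u) for real values and µ < 2/((M + 2)σ²_u) for complex values. These bounds can be evaluated with a small helper (the function name is ours):

```python
def lms_step_bound(M, sigma2_u, complex_data=False):
    """Largest stable step size mu for an order-M LMS filter with white
    input of power sigma2_u: Eq. (24) (real) or Eq. (25) (complex)."""
    if complex_data:
        return 2.0 / ((M + 2) * sigma2_u)
    return 1.0 / ((M + 1) * sigma2_u)
```

For unit input power, an order M = 3 real filter requires µ < 0.25, while M = 10 requires µ < 1/11, illustrating that a higher order forces a smaller step size.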

A. SCENARIO 1: NORMAL KNOCK OCCURRENCE
1) LMS WITH POWER ESTIMATOR
Combining the power estimator with the LMS through the coefficient vector w_i has been shown to make the phenomenon characterization more efficient than the individual performance of either algorithm [28]. The combination enabled a change in the LMS parameter setting, where the best result was obtained with µ = 0.3 and M = 3. Figure 4 shows that the combination of these algorithms using the LMS output practically missed some low-intensity occurrences of the phenomenon, even when µ was increased to 1 to obtain faster tracking. Therefore, the best setting for scenario 1 with the LMS and power estimator uses w_i of the LMS as the input of the power estimator. The result of the second attempted combination for this scenario is shown in Fig. 5.
FIGURE 5. Scenario 1 with combined algorithms. (Bottom) The generated knock signal according to the model proposed by [28], (middle) the vector of coefficients w_i of LMS after filtering the signal, (top) the power estimator output, with w_i of LMS being its input.

2) NLMS WITH POWER ESTIMATOR
The best performance of the NLMS was obtained with µ = 0.25 and M = 2. The setting µ = 1, as used before, with M > 2 also tracked the signal well, but with higher variations. Figure 6 shows that the characterization, using w_i of the NLMS as the input of the power estimator, resulted in an output with the knock signal emphasized in a softened way while reducing the background noise to nearly zero, as initially intended. Comparing these responses with those obtained individually from the LMS and the power estimator shows how efficient the combination of algorithms is. Therefore, the best setting for scenario 1 with these two algorithms applies the estimated output d(i) of the NLMS to the power estimator input. The result of the second attempted combination for this scenario is shown in Fig. 7.

B. SCENARIO 2: PHENOMENON CHARACTERIZATION IN A CRITICAL CASE
1) LMS WITH POWER ESTIMATOR
This combination is not displayed due to divergences found in the LMS algorithm. For the combination with the NLMS, the most accurate response to the input signal was obtained using d(i). The result of the second attempted combination for this scenario is shown in Fig. 8, which makes clear why this scenario is called critical: the amplitude levels of the input signal, and the relationship between them, are not reproduced in the final output when using w_i as the input of the power estimator, i.e., the 4th peak is higher than the second and third ones in the input, but this is not reflected at the output of the filter.
FIGURE 8. Scenario 2 with combined algorithms. (Bottom) The generated knock signal according to the model proposed by [28], (middle) the vector of coefficients w_i of NLMS after filtering the signal, (top) the power estimator output, with w_i of NLMS being its input.

C. SCENARIO 3: KNOCK AND WHITE NOISE WITH SIMILAR AMPLITUDE
1) LMS WITH POWER ESTIMATOR
As shown in Fig. 10, noise of higher intensity was observed in the output of the combined algorithm when w_i was used as the power estimator input. This can be taken as a spurious knock because it happened before the simulated set of knocks. In this scenario, the combination did not allow much flexibility in changing parameters for better performance: only a small increase in µ was possible without lowering the LMS individual performance, and no improvement was obtained comparing the combined algorithm against the individual performance of the LMS for the same signal. Using d(i), the power estimator softened the variations present in its input, enabling an increase in the step size and, consequently, better tracking performance, although increasing the step size also increased the variations in the combined output. By increasing the step size to µ = 1 and the order to M = 10, a better characterization, in terms of less variation in the power estimator output, was observed.
In terms of signal intensity, the combination using w_i performed better, since its results had a magnitude of around 10^−1, while the results of the combinations using d(i) had magnitudes around 10^−3. In terms of characterization, the use of w_i produced a softened response in the output, but with an irregularity in the second occurrence, a while before the peak. The use of d(i) was more accurate in characterizing the original signal, but presented several variations. The rest of the system determines which combination should be used, considering their respective features. The results are presented in Fig. 10 and Fig. 11.
FIGURE 11. Scenario 3 with combined algorithms. (Bottom) The generated knock signal according to the model proposed by [28], (middle) the output d(i) of LMS after filtering the signal, (top) the power estimator output, with d(i) of LMS being its input.

2) NLMS WITH POWER ESTIMATOR
Utilizing w_i with µ = 0.1, M = 3, and ε = 0.05 provided the best results. The combination of algorithms also enabled a change of parameters when using d(i), where the setting µ = 1 and M = 10 provided a better result compared to the individual performances [28]. The results are presented in Figures 12 and 13.

VI. IMPLEMENTATION, PHYSICAL EXPERIMENTS AND RESULTS
Based on the simulations presented previously, we next discuss the chosen combination of algorithms, as well as its implementation and real-time observation.
(Bottom) The generated knock signal according to the model proposed by [28], (middle) the output d(i) of NLMS after filtering the signal, (top) the power estimator output, with d(i) of NLMS as its input.

A. REAL-TIME IMPLEMENTATION AND SOFTWARE WORKFLOW
The NLMS adaptive filter was implemented using MBD, given that the LMS did not present satisfactory results as the first algorithm: the LMS is sensitive to correlated data, since its convergence depends on the input values, as shown in Eq. (8). As a result, the LMS characterization of the knock phenomenon was poor, given that the knock is a highly correlated signal. Therefore, the NLMS was the only adaptive filter considered for experimental validation. Although Fig. 10 shows a less noisy result when w_i is used as the adaptive filter output, the order required for that configuration was M = 9, which we considered too expensive for real-time calculation, not to mention the result obtained in scenario 2, shown in Fig. 8. Therefore, following the premises and analysis of section V, we chose the NLMS parameters found in scenario 1, which are µ = 0.25, ε = 0.651, and order M = 2. For the power estimator, we used β = 0.01 and α = 0.99.
The NLMS was connected to the power estimator, as shown in Fig. 14. This model was tested against others, and it provided the best characterization of the phenomenon. The output of the NLMS, which is connected to the power estimator input, is the estimated output d(i), the denoised version of the input. Finally, the characterization of the knock phenomenon is observed at the output of the power estimator. The software execution workflow is described by the steps shown in Algorithm 1.
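This workflow can be sketched sample by sample as a single loop: a simplified pure-Python sketch of the Fig. 14 model with the parameters of section VI-A. The actual model as implemented in ASCET may differ in detail, and the function name is ours:

```python
def knock_pipeline(x, M=2, mu=0.25, eps=0.651, alpha=0.99, beta=0.01):
    """NLMS predictor feeding a first-order power estimator (cf. Fig. 14).

    The NLMS estimate d_hat (denoised input) drives the power estimator;
    the returned sequence is the knock characterization.
    """
    w = [0.0] * M
    p = 0.0
    out = []
    prev = [0.0] * M                  # last M input samples, newest first
    for sample in x:
        d_hat = sum(wk * uk for wk, uk in zip(w, prev))   # NLMS estimate
        e = sample - d_hat
        norm = eps + sum(uk * uk for uk in prev)
        w = [wk + (mu / norm) * e * uk for wk, uk in zip(w, prev)]
        p = alpha * p + beta * d_hat * d_hat              # power estimator
        out.append(p)
        prev = [sample] + prev[:-1]
    return out
```

With M = 2, each iteration involves only a handful of multiply-accumulate operations, which is what makes the solution affordable for real-time execution on an ECU-class processor.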

B. EXPERIMENTS AND RESULTS
We proceeded to the experimental part with the model and the parameters chosen in section VI-A. The model-based software was developed using the model-based function generator ASCET (from ETAS) and then downloaded into the HIL ES630, also from ETAS. The results were measured in a Volkswagen Polo 2010 2.0 flex-fuel engine, placed on a passive dynamometer roll. Fig. 17 shows the experimental setup, with a laptop working as a human-machine interface, the HIL ES630 (blue cases on the right-hand side), an oscilloscope to measure the knock signal, and the automobile. The vehicle was also equipped with an ECU developed by our research group (see Fig. 18, upper right side), which allowed control of the ignition timing. The spark angle in this ECU is advanced with respect to the vehicle's nominal calibration point, in order to cause knock for the experiments.
With that, the following procedure was executed: 1) Park the car on the dynamometer roll; 2) Start the car; 3) Start throttling with first gear engaged; 4) Wait until the vehicle reaches no more than 10 km/h in the initial gears; 5) Collect the signal arising from the knock sensor using the HIL.

During the experiment, the throttle pedal was completely pressed. This provided a higher torque request to the engine via the ECU, because the engine speed is low, as is the speed imposed by the roll for the selected gear. This increased the pressure inside the chamber and, therefore, the temperature. Fig. 19 shows why scenario 3 was created: high levels of noise can be observed at the sensor with the engine off and on, respectively. Observing the noise levels also helped us choose the appropriate threshold value used in practice for knock detection. The highest amplitude observed with the engine working without knock was 0.25 V for this vehicle, whereas amplitudes higher than 0.5 V were considered the knock occurrence threshold (THR), as shown in Fig. 20. This threshold corresponded to the instants when the phenomenon was audibly detected. As observed in Fig. 20, using the established threshold, the filtered signal shows an envelope pattern that allows one to calculate the knock duration, as well as to report the distinct intensities with a smooth signal profile.
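The thresholding step described above can be sketched as follows. The 0.5 V threshold is the value found for this vehicle; the function name and the sample rate are illustrative assumptions:

```python
def detect_knock(envelope, thr=0.5, fs=100_000.0):
    """Threshold the filtered envelope (THR = 0.5 V for this vehicle) and
    report occurrence, duration of the longest excursion above THR, and
    peak value. The sample rate fs is an illustrative assumption."""
    best_run = run = 0
    peak = 0.0
    for v in envelope:
        if v > thr:
            run += 1
            best_run = max(best_run, run)
            peak = max(peak, v)
        else:
            run = 0
    return {"knock": best_run > 0,
            "duration_s": best_run / fs,
            "peak_v": peak}
```

Counting contiguous samples above THR in the smooth envelope is what turns the filtered signal into the knock duration and intensity readings mentioned in the text.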

VII. CONCLUSION
The algorithms proposed here have shown the viability of characterizing the knock signal by combining adaptive filters and power estimators. The simulations helped to assess the algorithm performance and to find parameters that allow efficient embedded real-time computation in a real flex-fuel engine; this strategy could be generalized to gasoline engines as well. Combined with the power estimator, an NLMS filter with M = 2 was used, meaning that the matrix operations involved only 2 components in the processed vectors. The implementation, following the ASAM MCD-2 MC standard, allows professionals in the automotive industry to evaluate the proposed method. MBD enables portability to other implementations that may include the knock characterization component as part of an integrated control software that performs engine management.
For future work, the results of this study, combined with further software development, could provide reliable knock characterization information that could replace the current integrated circuit for knock treatment or serve as a redundant device for safety purposes. Validation with more test signals would also be needed to fully develop an integrated solution with an appropriate detection threshold.
Three output variables could easily be obtained from the implementation of this study: one could inform the binary occurrence of a knock, while the other two could provide the duration and amplitude of the detected knock. Integration with a control pipeline would likely be needed to assess the real impact on vehicle dynamics and engine management strategy, hence allowing a comparison with current state-of-the-art integrated control methods.