
Aircraft Engines Remaining Useful Life Prediction Based on A Hybrid Model of Autoencoder and Deep Belief Network




Abstract:

Remaining Useful Life (RUL) is used to provide an early indication of failures that require maintenance and/or replacement of the system in advance. Accurate RUL prediction offers cost-effective operation for decision-makers in the industry. The availability of data from intelligent sensors leverages the power of data-driven methods for RUL estimation. Deep Learning (DL) is one example of a data-driven method with many applications in industry. One of these applications is RUL prediction, where DL algorithms have achieved good results. This paper presents an Autoencoder-based Deep Belief Network (AE-DBN) model for aircraft engine RUL estimation. The AE-DBN DL model utilizes the feature extraction capability of the AE and the superiority of the DBN in learning long-range dependencies. The efficiency of the proposed DL algorithm is evaluated by comparing the proposed AE-DBN with state-of-the-art related methods for RUL prediction on four datasets. Based on the Root Mean Square Error (RMSE) and Score indices, the outcomes reveal that the AE-DBN RUL prediction model is superior to other DL approaches.
Published in: IEEE Access ( Volume: 10)
Page(s): 82156 - 82163
Date of Publication: 05 July 2022
Electronic ISSN: 2169-3536

SECTION I.

Introduction

Remaining Useful Life (RUL) is used to ensure the safety and reliability of aircraft equipment. It is used to decide whether to perform maintenance and how many spare parts should be ordered such that the overall maintenance cost is reduced [1]. If the lifetime of the aircraft equipment cannot be known with certainty, it is recommended to keep monitoring the life and state of the operating equipment. Hence, monitoring the equipment gives the required information on its current working age and state by measuring certain variables that may affect its operation [2].

Existing research on RUL estimation methods for aircraft equipment can generally be grouped into physics-based methods and data-driven methods. In the physics-based approach, the RUL estimation model is developed using damage propagation to identify potential failures of the equipment. However, the complexity of damage propagation, as well as the uncertainty of the operating environment, makes it extremely difficult to identify the potential failure mechanism for many components and systems. On the other hand, data-driven methods employ data collected from sensors integrated with the equipment to develop the RUL prediction model [3]. Aircraft are now fully instrumented with advanced sensors that continually collect information about the operating condition of the aircraft. These data facilitate the transition from physics-based methods to data-driven methods for RUL estimation of aircraft equipment [4]. The power of data-driven methods for RUL estimation arises from the rapid development of Internet-of-Things (IoT) and cyber-physical technologies, which provide a massive amount of data. These data make it possible to apply Artificial Neural Network (ANN) methods such as Deep Learning (DL) [5]. In this respect, RUL prediction can be considered a time series regression problem.

DL utilizes multiple layers of neurons to learn complex models. Recently, DL has found many applications in industry [6], [7]. One of these applications is RUL prediction, where DL algorithms have achieved good results. For example, the authors in [8] developed an extended recurrent neural network (ERNN) algorithm for an RUL prediction model using vibration data collected from a gearbox experimental system. Deutsch and He in [9] proposed a Restricted Boltzmann Machine (RBM) algorithm for an RUL prediction model using data collected from bearing run-to-failure tests. The authors in [10] employed a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) method for an RUL prediction model of the capacity degradation trajectories of lithium-ion batteries.

As shown above, DL methods have received huge interest in RUL prediction for various applications. However, the prediction performance of these methods depends on the features, dependencies, and dimensionality of the input time series data used to build the DL models for RUL prediction.

One of the advantages of the Autoencoder (AE) DL architecture is its ability to capture and extract useful features from the input data. On the other hand, the Deep Belief Network (DBN) architecture offers deep hierarchical learning, which allows it to capture long-range dependencies and learn sophisticated features from the input data. Compared with traditional shallow learning approaches, these properties yield improved prediction accuracy while keeping the prediction system simple.

Owing to the feature extraction characteristic of the AE and the superiority of the DBN in learning long-range dependencies, this paper presents a combined Autoencoder and Deep Belief Network (AE-DBN) model for RUL estimation. In the proposed architecture, the AE model is used to extract abstract features from the input data, while the DBN model is used as a predictor for the time series RUL of the aircraft engine.

The remainder of this paper is organized as follows. Section 2 reviews related work on DL methods applied to RUL prediction for aircraft. The proposed hybrid DL approach for RUL estimation is described in Section 3. The experimental study and comparisons with other DL methods using the C-MAPSS dataset are provided in Section 4. Finally, the paper is concluded in Section 5.

SECTION II.

Related Works

With the help of the advanced sensors integrated with aircraft equipment, data representing the status of the aircraft have become available. These data allow accurate data-driven RUL prediction models for aircraft based on DL methods to be developed [4]. This section reviews the related work on DL methods applied to aircraft RUL prediction.

The early promising results of applying DL algorithms to RUL prediction of aircraft engines were reported in [11]. The authors proposed a Convolutional Neural Network (CNN) based regression approach for estimating the RUL, in which the CNN structure incorporates automated feature learning from raw sensor signals. Later, the authors of [12] developed a Long Short-Term Memory (LSTM) network for estimating the RUL from aircraft engine data; compared with the CNN, the LSTM showed better performance.

The authors in [13] introduced a data-driven approach for RUL prediction based on Deep Convolutional Neural Networks (DCNN). Raw data collected from an aero-engine unit was used to show the effectiveness of the proposed approach. The work in [14] used a Stacked Sparse Autoencoder (SSAE) for RUL prediction, with the hyperparameters of the SSAE determined by a grid search. The authors in [15] presented a combined DL algorithm for RUL prediction in which the prediction of multiple multivariate time series collected from aircraft sensors was performed by an LSTM-based recurrent network, while a Deep Belief Network (DBN) was utilized to assess the system working conditions and categorize faults of aircraft equipment. In the same direction of using hybrid methods, the authors in [16] proposed a directed acyclic graph (DAG) network that combines long short-term memory (LSTM) with a convolutional neural network (CNN) to predict the RUL. Based on the features of the training data collected from aircraft sensors, the method proposed in [17] utilized a modified Denoising Autoencoder (DAE) for robust feature extraction, and the authors integrated an Updated Selection Strategy (USS) to ensure that valuable data are passed through the training process. To track newly arriving data while gradually forgetting old data, an Online Sequential Extreme Learning Machine (OS-ELM) with double dynamic forgetting factors (DDFF) was proposed. Unlike the methods presented in [17], the authors in [18] used an LSTM to learn sequential features collected from aircraft sensors and then proposed an attention mechanism with a feature fusion method for RUL estimation. A combined CNN and Bi-directional Long Short-Term Memory (BDLSTM) network is presented in [19] for aircraft RUL prediction, where the CNN is used to extract spatial features and the BDLSTM to extract temporal features. To improve the capability of the CNN-LSTM combination, the work presented in [20] proposed a double-channel hybrid prediction model based on a CNN and a bidirectional LSTM network (CNN+BiLSTM). In addition, the work in [21] presented a model that combines parallel branches of CNNs in series with an LSTM, named multi-head CNN-LSTM. Recently, the authors in [22] proposed a deep learning model that combines a transformer encoder with a temporal convolutional neural network (Trans.+TCNN); the transformer encoder uses scaled dot-product attention to extract dependencies across distances in the time series, whereas the temporal convolutional network is built to compensate for the insensitivity of the self-attention mechanism to local features.

Owing to the feature extraction characteristic of the AE and the superiority of the DBN in learning long-range dependencies, this work proposes a combined deep learning architecture in which the AE is used for feature extraction from the input data and the DBN serves as the predictor for the time series RUL of the aircraft engine, with the aim of improving the accuracy of aircraft engine RUL prediction.

SECTION III.

Proposed Deep Learning Architecture

This section presents the description of the proposed Autoencoder-based Deep Belief Network (AE-DBN).

A. Deep Belief Networks

The Deep Belief Network (DBN) can be viewed as a stack of simple unsupervised networks, such as Restricted Boltzmann Machines (RBMs), in which the hidden layer of each subnetwork serves as the visible layer of the next. The DBN structure is trained with an efficient layer-by-layer procedure that determines how the variables in one layer depend on the variables in the layer above [23].

The developed DBN consists of multiple visible and hidden RBM layers with a logistic regression layer for classification at the output. In the first step of the development process, the input vectors are mapped to different feature spaces, and each RBM layer is trained in an unsupervised way so as to preserve its feature information. In the second step, fine adjustments are made. In the last step, the output feature vector of each RBM is taken as the input feature vector of the next RBM. The architecture of the DBN is shown in Figure 1 [24].

Figure 1. DBN architecture.

In the RBM model, the units of the visible layer are denoted by v_{i} and those of the hidden layer by h_{j}. The weight between v_{i} and h_{j} is denoted by w_{ij}. The visible and hidden nodes have biases represented by the vectors c and b, respectively. The b_{j}, c_{i}, and w_{ij} values of all RBMs in the model form the parameter set \theta of the DBN. The parameter \theta enters the model through the probability of the joint states of the visible and hidden layers, which is defined by an energy function [25]. This energy function is given in Eq. 1.
\begin{equation*} E\left(\theta, v, h\right) = -\sum_{i=1}^{m} v_{i} c_{i} - \sum_{j=1}^{n} h_{j} b_{j} - \sum_{i=1}^{m}\sum_{j=1}^{n} v_{i} h_{j} w_{ij} \tag{1}\end{equation*}

Since there are no connections between units within the same layer of an RBM, the conditional probability distributions of the visible and hidden units are calculated as given in Eq. 2 and Eq. 3.
\begin{align*} P\left(v_{i}=1 \mid h\right) &= \frac{1}{1+e^{-c_{i}-\sum_{j} h_{j} w_{ij}}} \tag{2}\\ P\left(h_{j}=1 \mid v\right) &= \frac{1}{1+e^{-b_{j}-\sum_{i} v_{i} w_{ij}}} \tag{3}\end{align*}
After the weight calculations are completed, the reconstructed data are obtained from the p(v\mid h) calculation, i.e., the hidden activations are transmitted back to the visible layer through the logistic function \sigma. The logistic function \sigma is defined in Eq. 4.
\begin{equation*} \sigma\left(x\right)=\left(1+e^{-x}\right)^{-1} \tag{4}\end{equation*}
Likewise, the conditional probability that v_{i}=1 can be written compactly using \sigma, as given in Eq. 5.
\begin{equation*} P\left(v_{i}=1 \mid h\right) = \sigma\left(c_{i}+\sum_{j} w_{ij} h_{j}\right) \tag{5}\end{equation*}
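As an illustration of Eqs. 1–5, the following NumPy sketch evaluates the RBM energy and conditional probabilities for one visible/hidden pair and performs a single contrastive-divergence (CD-1) update, which is the usual way each RBM layer of a DBN is pretrained. The layer sizes, learning rate, and random initialization are purely illustrative and are not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    # Logistic function of Eq. 4
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
m, n = 6, 4                               # visible / hidden units (illustrative)
W = rng.normal(0.0, 0.1, size=(m, n))     # weights w_ij between v_i and h_j
c = np.zeros(m)                           # visible biases
b = np.zeros(n)                           # hidden biases

v = rng.integers(0, 2, size=m).astype(float)           # example binary visible vector

# Eq. 3: P(h_j = 1 | v) = sigma(b_j + sum_i v_i w_ij)
p_h = sigmoid(b + v @ W)
h = (rng.random(n) < p_h).astype(float)                 # sample hidden units

# Eq. 2 / Eq. 5: P(v_i = 1 | h) = sigma(c_i + sum_j w_ij h_j)
p_v = sigmoid(c + W @ h)
v_recon = (rng.random(m) < p_v).astype(float)           # reconstructed visible units

# Eq. 1: energy of the joint configuration (v, h)
energy = -(v @ c) - (h @ b) - v @ W @ h

# One CD-1 update, as used when pretraining each RBM layer of the DBN
lr = 0.01
p_h_recon = sigmoid(b + v_recon @ W)
W += lr * (np.outer(v, p_h) - np.outer(v_recon, p_h_recon))
c += lr * (v - v_recon)
b += lr * (p_h - p_h_recon)

print(energy, p_h, p_v)
```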

B. Deep Autoencoders

An Autoencoder (AE) is a three-layer unsupervised neural network. It is one of the most basic forms of neural network used for representation learning, such as feature selection or dimensionality reduction, and it tries to reconstruct the input patterns at the output layer [26].

A general feature of the AE is that the input and output layers have the same size, giving a symmetrical architecture. The hidden layer in the network model typically contains fewer neurons than the visible layers. By using a small number of neurons, the network attempts to encode or represent the input in a more compact way that captures the meaningful properties of the input vectors. Figure 2 shows a basic AE and a deep AE network architecture [27].

Figure 2. AE architecture.

Training of the AE is carried out by applying the backpropagation algorithm, as in feedforward neural networks, with a Mean-Squared Error (MSE) loss function. The training process consists of two stages, encoding and decoding. In the encoding phase, the inputs are encoded into a hidden representation using the weights of the lower half of the network. In the decoding phase, the same input is reconstructed from this encoded representation using the weights of the upper half, so the encoding and decoding weights are forced to be transposes of each other. Let X be data with n samples and m features, and let Y be the output of the encoder (i.e., the reduced representation of X). The mathematical representations of the encoding and decoding operations for a basic AE are given in Eq. 6 and Eq. 7, respectively [28].
\begin{align*} Y &= f\left(wX+b\right) \tag{6}\\ X' &= g\left(w'Y+b'\right) \tag{7}\end{align*}
In Eq. 6 and Eq. 7, w, b, and b' are the adjustable parameters, f and g are the activation functions, w' is the transpose of the weights w, and X' is the reconstructed input vector at the output layer.

Training an autoencoder involves finding the parameters w and b that minimize the error between the input data X and the reconstructed data X'.
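As a concrete illustration of Eqs. 6 and 7, the following Keras sketch builds a basic autoencoder with a single bottleneck layer and trains it with the MSE reconstruction loss described above. The layer sizes, activations, and placeholder data are assumptions for illustration only, not the configuration reported in Table 2.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

m, hidden = 21, 10                           # input features / bottleneck size (illustrative)

inputs = keras.Input(shape=(m,))
encoded = layers.Dense(hidden, activation="sigmoid", name="encoder")(inputs)  # Eq. 6: Y = f(wX + b)
decoded = layers.Dense(m, activation="linear", name="decoder")(encoded)       # Eq. 7: X' = g(w'Y + b')

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")      # MSE reconstruction loss

X = np.random.rand(1000, m).astype("float32")          # placeholder data
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

# The trained encoder alone maps raw inputs to the compact representation Y.
encoder = keras.Model(inputs, encoded)
Y = encoder.predict(X, verbose=0)
print(Y.shape)
```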

C. The Proposed AE-DBN

The proposed AE-DBN architecture is shown in Figure 3. It consists of two main parts. In the first part, the AE is used as a deep learning model for feature extraction from the input data. The second part is a deep learning model based on the DBN for predicting the RUL.

Figure 3. AE-DBN architecture.

The encoder part of the AE is responsible for extracting the features that represent the characteristics of the input data. These extracted features are fed to the DBN part of the proposed architecture, which is used to predict the RUL. Initially, the AE is trained separately to obtain the weight matrix before training the DBN RUL prediction model. The decoder part of the AE is used to verify that the extracted features are valid for reconstructing the original data. The obtained AE weight matrix is then combined with the DBN model, which is finally trained on the input data for RUL prediction.
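The two-stage training order described above can be sketched as follows: the AE is pretrained to reconstruct the sensor data, its encoder is reused in front of the RUL regressor, and the combined network is fine-tuned on the input windows. The dense predictor below merely stands in for the DBN of Section III-A (a faithful implementation would pretrain each RBM layer-wise as sketched earlier), and all layer sizes and placeholder data are illustrative rather than the settings of Tables 2 and 3.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

m, encoded_dim, window = 21, 10, 50          # features, bottleneck size, window length (illustrative)

# Stage 1: train the AE separately to obtain the encoder weight matrix.
ae_in = keras.Input(shape=(m,))
enc = layers.Dense(encoded_dim, activation="sigmoid", name="encoder")(ae_in)
dec = layers.Dense(m, activation="linear", name="decoder")(enc)
ae = keras.Model(ae_in, dec)
ae.compile(optimizer="adam", loss="mse")

X_raw = np.random.rand(5000, m).astype("float32")          # placeholder sensor readings
ae.fit(X_raw, X_raw, epochs=5, batch_size=64, verbose=0)   # decoder verifies the reconstruction

# Stage 2: reuse the trained encoder in front of the RUL predictor.
encoder = keras.Model(ae_in, enc)
predictor = keras.Sequential([
    keras.Input(shape=(window, encoded_dim)),
    layers.Flatten(),
    layers.Dense(64, activation="sigmoid"),
    layers.Dense(32, activation="sigmoid"),
    layers.Dense(1, activation="linear"),    # predicted RUL
])

model = keras.Sequential([
    keras.Input(shape=(window, m)),
    layers.TimeDistributed(encoder),         # encode each time step with the pretrained AE
    predictor,
])
model.compile(optimizer="adam", loss="mse")

X_win = np.random.rand(200, window, m).astype("float32")   # placeholder input windows
y_rul = np.random.rand(200, 1).astype("float32") * 100.0   # placeholder RUL targets
model.fit(X_win, y_rul, epochs=2, batch_size=32, verbose=0)
```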

SECTION IV.

Experimental Study

A. Dataset

The dataset most widely used in the literature for predictive maintenance of aircraft health systems is the NASA Turbofan Engine Degradation Simulation dataset [29]. The dataset was created by NASA engineers using commercial simulation software called C-MAPSS. A turbofan engine capable of producing 90,000 lb of thrust is simulated at altitudes of 0–40,000 ft, Mach numbers of 0–0.9, and ambient temperatures of -60 to 103 °F. The aircraft engine attributes used for experimentation were the engine core speed, engine fan speed, pressure at the fan inlet, High-Pressure Turbine (HPT) exit temperature, pressure at the High-Pressure Compressor (HPC), and engine pressure ratio. To monitor these engine attributes, a total of 21 onboard sensors measuring temperature, pressure, and speed are distributed at various locations, as shown in Figure 4.

Figure 4. Aircraft engine and sensors.

The C-MAPSS software also has various regulators and limiters that prevent the engine from being taken out of the operating range specified by the manufacturer.

To create the dataset, the engine was operated together with its control system. When the health index of the engine dropped to zero, the simulation was stopped and the obtained sensor data were recorded as a time series. The engine health index is 1 when every part that makes up the engine is healthy, and 0 when the engine goes outside the specified operating conditions. While the training data were recorded until the engine health index reached 0, the test and verification data were terminated before the engine failed in order to measure the RUL. The difference between the current cycle and the cycle at which the engine health index falls to zero gives the RUL value of the engine.

Four different sub-datasets are considered, prepared for different operating conditions and fault scenarios. FD001 and FD003 contain a single operating condition and fault type for 100 engines. FD002 includes six operating conditions and 260 engines, and FD004 includes six operating conditions and 249 engines. The samples in each sub-dataset contain the sensor records at each run-to-failure cycle for each engine, which are used to train the model for predicting the RUL. The details of each dataset used to train and evaluate the proposed method for RUL prediction are shown in Table 1 below.

TABLE 1. Evaluation Datasets Description
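For readers who want to reproduce the data preparation, the following pandas/NumPy sketch loads one training file, derives the RUL label for every cycle of each run-to-failure engine, and cuts the series into fixed-length input windows (the 50-cycle window length used later in Section IV-C). The file name and the assumed column layout (unit id, cycle, three operational settings, 21 sensor channels in space-separated text files) come from the publicly distributed C-MAPSS dataset, not from details given in this paper.

```python
import numpy as np
import pandas as pd

# Assumed column layout of the public C-MAPSS text files:
# unit id, cycle, three operational settings, 21 sensor channels.
cols = ["unit", "cycle", "op1", "op2", "op3"] + [f"s{i}" for i in range(1, 22)]
train = pd.read_csv("train_FD001.txt", sep=r"\s+", header=None, names=cols)

# Training engines run to failure, so the RUL at each cycle is the number of
# cycles remaining until that engine's last recorded cycle.
train["RUL"] = train.groupby("unit")["cycle"].transform("max") - train["cycle"]

def make_windows(df, feature_cols, window=50):
    """Cut each engine's series into (window, n_features) blocks labeled with the RUL at the last cycle."""
    X, y = [], []
    for _, g in df.groupby("unit"):
        feats = g[feature_cols].to_numpy(dtype="float32")
        rul = g["RUL"].to_numpy(dtype="float32")
        for end in range(window, len(g) + 1):
            X.append(feats[end - window:end])
            y.append(rul[end - 1])
    return np.stack(X), np.array(y)

sensor_cols = [f"s{i}" for i in range(1, 22)]
X_win, y_rul = make_windows(train, sensor_cols, window=50)
print(X_win.shape, y_rul.shape)
```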

B. Model Design

The proposed AE-DBN architecture involves several key structural variables, such as the number of input nodes, the number of hidden layers of the AE model, the number of RBM stages of the DBN model, the number of hidden layers of the DBN model, and the model hyperparameters. The key structural settings used in the proposed model were obtained through experimental trials aimed at achieving the best RUL prediction performance. The parameters of each model are summarized in Table 2 and Table 3.

TABLE 2. AE Model Parameters

TABLE 3. DBN Model Parameters

C. Performance Evaluation

In order to verify the effectiveness of the proposed AE-DBN method for RUL prediction, the model is trained and evaluated using the four datasets (FD001, FD002, FD003 and FD004) described in the previous section. To avoid model overfitting, each dataset is split into 60% training and 40% testing samples. A comparison is carried out between the proposed AE-DBN and state-of-the-art related methods for RUL prediction, such as DBN, CNN, LSTM, and attention-based LSTM. Commonly used evaluation criteria, namely the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R^{2}, and Score, are considered to evaluate the model performance in terms of RUL prediction accuracy [30]. These metrics are calculated as given in Eq. 8, Eq. 9, Eq. 10, and Eq. 11, respectively.
\begin{align*} RMSE &= \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(a_{i}-p_{i}\right)^{2}} \tag{8}\\ MAE &= \frac{1}{N}\sum_{i=1}^{N}\left|p_{i}-a_{i}\right| \tag{9}\\ R^{2} &= 1-\frac{\sum_{i=1}^{N}\left(a_{i}-p_{i}\right)^{2}}{\sum_{i=1}^{N}\left(a_{i}-p_{mean}\right)^{2}} \tag{10}\\ Score &= \begin{cases} \sum_{i=1}^{N}\left(e^{-\frac{a_{i}-p_{i}}{13}}-1\right) & \text{for } a_{i}<p_{i} \\ \sum_{i=1}^{N}\left(e^{\frac{a_{i}-p_{i}}{10}}-1\right) & \text{for } a_{i}\ge p_{i} \end{cases} \tag{11}\end{align*}

where N is the number of samples, a_{i} is the actual value, p_{i} is the predicted value, and p_{mean} is the mean of the predicted values.
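The four evaluation metrics can be computed directly from these definitions. The small sketch below follows Eqs. 8–11 exactly as written above (including the use of p_mean, the mean of the predictions, in Eq. 10); the sample arrays are illustrative only.

```python
import numpy as np

def rmse(a, p):
    return np.sqrt(np.mean((a - p) ** 2))                                # Eq. 8

def mae(a, p):
    return np.mean(np.abs(p - a))                                        # Eq. 9

def r2(a, p):
    # Eq. 10 as written, with p_mean the mean of the predicted values
    return 1.0 - np.sum((a - p) ** 2) / np.sum((a - np.mean(p)) ** 2)

def score(a, p):
    # Eq. 11 as written: asymmetric exponential penalty with time constants 13 and 10
    d = a - p
    return np.sum(np.where(d < 0, np.exp(-d / 13.0) - 1.0, np.exp(d / 10.0) - 1.0))

a = np.array([112.0, 98.0, 69.0])      # actual RUL (illustrative)
p = np.array([105.0, 101.0, 75.0])     # predicted RUL (illustrative)
print(rmse(a, p), mae(a, p), r2(a, p), score(a, p))
```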

The proposed AE-DBN model is trained and evaluated on the FD001, FD002, FD003, and FD004 datasets separately, using the parameters explained in the model design section. Based on the selected window length, the model predicts the RUL of a particular engine from the previous 50 samples of sensor readings. To demonstrate the effectiveness of the AE in extracting features that provide a better representation of the data, the proposed AE-DBN is compared with a regular DBN that is fed directly with the original data. The evaluation results are shown in Table 4.

TABLE 4. Experimental Results Using FD001 and FD004 Datasets

Based on the results achieved by the proposed AE-DBN model and the regular DBN model, it was found that adding the AE model for feature extraction enhanced the overall model accuracy, since the AE reduces the original data features and extracts the most representative ones. The proposed AE-DBN model achieved better performance on the FD001 and FD003 datasets than on the FD002 and FD004 datasets, since the latter contain more samples and include multiple operating conditions, which makes the learning task harder for the model.

The predicted and the actual RUL obtained with the AE-DBN model on the four datasets, for each engine and using the previous 50 samples of sensor readings, are shown in Figures 5–8.

Figure 5. Actual RUL and the predicted RUL for the FD001 dataset.

Figure 6. Actual RUL and the predicted RUL for the FD002 dataset.

Figure 7. Actual RUL and the predicted RUL for the FD003 dataset.

Figure 8. Actual RUL and the predicted RUL for the FD004 dataset.

As shown in Figures 5–8, the predicted RUL for each engine matches the actual RUL very well, which indicates the viability of the proposed AE-DBN model for RUL prediction.

For example, based on the FD001 dataset, the RMSE, MAE, and Score decrease from 13.45, 14.19, and 228 for the standard DBN to 11.27, 11.91, and 219 for the proposed AE-DBN, whereas R^{2} increases from 0.9405 to 0.9545. For the FD002 dataset, the RMSE, MAE, and Score decrease from 17.55, 19.15, and 1379 for the standard DBN to 14.24, 14.85, and 1255 for the proposed AE-DBN, whereas R^{2} increases from 0.9120 to 0.9411. For the FD003 dataset, the RMSE, MAE, and Score decrease from 12.32, 13.25, and 285 for the standard DBN to 11.13, 11.48, and 264 for the proposed AE-DBN, whereas R^{2} increases from 0.9452 to 0.9513.

Similarly, for the FD004 dataset, the RMSE, MAE, and Score decrease from 28.5444, 28.9798, and 2147 for the standard DBN to 26.8508, 27.3347, and 2135 for the proposed AE-DBN, whereas R^{2} increases from 0.8896 to 0.8999.

In addition, the performance of the proposed AE-DBN in terms of RMSE for RUL prediction is compared with similar state-of-the-art methods in the literature that used the FD001, FD002, FD003, and FD004 datasets, as shown in Table 5.

TABLE 5. RMSE Comparison Between the Proposed Method and Related Works Using the FD001, FD002, FD003 and FD004 Datasets

Based on the comparison results in Table 5 for the FD001 dataset, it can be seen that the proposed model reduces the RMSE from 18.45 for the regular CNN, 16.14 for the regular LSTM, 14.53 for the attention-based LSTM, 13.27 for MHCNN+LSTM, 12.5 for BiLSTM+MSCNN, 12.31 for Trans.+TCNN, and 11.96 for the DAG network, to 11.27 for the proposed AE-DBN DL model. For FD002, the RMSE is reduced from 30.29 for the CNN, 24.49 for the LSTM, 20 for the DAG network, 19.49 for MHCNN+LSTM, 19.34 for BiLSTM+MSCNN, and 15.35 for Trans.+TCNN, to 14.24 for the proposed AE-DBN DL model.

For FD003, the RMSE is reduced from 19.82 for the CNN, 16.18 for the LSTM, 13.21 for MHCNN+LSTM, 12.466 for the DAG network, 12.32 for Trans.+TCNN, and 12.1 for BiLSTM+MSCNN, to 11.13 for the proposed AE-DBN DL model.

For the FD004 dataset, the proposed model reduces the RMSE from 29.16 for the regular CNN and 27.08 for the attention-based LSTM to 26.85 for the proposed AE-DBN DL model. However, the DAG network, BiLSTM+MSCNN, MHCNN+LSTM, and Trans.+TCNN have shown better results than the proposed model.

Finally, the performance of the proposed AE-DBN in terms of Score for RUL prediction is compared with similar state-of-the-art methods in the literature that used the FD001, FD002, FD003, and FD004 datasets, as shown in Table 6.

TABLE 6. Score Comparison Between the Proposed Method and Related Works Using the FD001, FD002, FD003 and FD004 Datasets

Based on the comparison results in Table 6 for the FD001 dataset, it can be seen that the proposed model reduces the Score from 1286 for the regular CNN, 338 for the regular LSTM, 322 for the attention-based LSTM, 259 for MHCNN+LSTM, 252 for Trans.+TCNN, 231 for CNN+BiLSTM, and 229 for the DAG network, to 219 for the proposed AE-DBN DL model. For FD002, the Score is reduced from 13570 for the CNN, 4450 for the LSTM, 4350 for MHCNN+LSTM, 2730 for the DAG network, 2650 for CNN+BiLSTM, and 1267 for Trans.+TCNN, to 1255 for the proposed AE-DBN DL model.

For the FD003 dataset, the proposed model reduces the Score from 15962 for the regular CNN, 852 for the regular LSTM, 535 for the DAG network, 343 for MHCNN+LSTM, and 296 for Trans.+TCNN, to 264 for the AE-DBN DL model. However, CNN+BiLSTM has shown a better result than the proposed model, with a Score of 257. For the FD004 dataset, the proposed model reduces the Score from 7886 for the regular CNN, 5550 for the regular LSTM, 4340 for MHCNN+LSTM, 3400 for CNN+BiLSTM, and 3370 for the DAG network, to 2135 for the AE-DBN DL model. However, Trans.+TCNN has shown a better result than the proposed model, with a Score of 2120.

In general, this comparison shows that the proposed AE-DBN outperforms the similar related works on three of the four datasets in terms of RMSE and on two of the four in terms of the Score index.

SECTION V.

Conclusion

Deep Learning (DL) based methods have proven to be very promising for the Remaining Useful Life (RUL) prediction of equipment. This paper proposed an Autoencoder-based Deep Belief Network (AE-DBN) model for RUL prediction of aircraft engines. An experimental study was conducted using a published aircraft engine dataset to evaluate the effectiveness of the proposed RUL prediction model. To investigate the estimation performance, the AE-based DBN was compared with the standard DBN model. The results show a considerable improvement of the AE-DBN over the standard DBN in terms of RMSE, MAE, R^{2}, and Score. Moreover, the results were also compared with other DL algorithms. For three out of the four datasets (FD001, FD002 and FD003), the RMSE of the proposed AE-DBN model is lower than that of other state-of-the-art related methods for RUL prediction. Likewise, for two out of the four datasets (FD001 and FD002), the Score of the proposed AE-DBN model is lower than that of the other state-of-the-art related methods. The overall results reveal that the AE-DBN RUL prediction model outperforms the state-of-the-art works and the standard DBN RUL prediction model. As future research, this work can be extended in two directions. The first is to explore the use of swarm-based optimization techniques to determine the optimal hyperparameters of the DL model, achieving higher accuracy with lower implementation complexity. The second is to hybridize the model with other DL algorithms to further boost the RUL prediction performance.
