Introduction
Industrial chillers are used in a variety of applications in which a liquid is circulated through process machinery. In this case study, the chilled liquid is water from plant chilled water (PCW), the refrigerant is hydrofluoroether (HFE) together with its coolant agent (lubricant oil), and the targeted cooling process is a central processing unit (CPU). This cooling mechanism simulates an environmental temperature attack, via temperature control, on a CPU die layer in a high volume manufacturing (HVM) environment. An HVM environment is known to be continually subjected to disturbances, which cause deviations that are stochastic in nature [1], [2]. These stochastic disturbances are attributable to HVM process monitoring [1] and tool variation [3], [4]. Even with an optimal HVM preventive maintenance schedule to mitigate the risk of sudden failure, failures still occur stochastically [5].
The core issue with HVM is that temperature changes are rapid and continuous, a behavior explained by the kinetic theory of matter [6]–[10]. Under this rapid, continuous transformation, microcracks are induced in the heat exchanger coil, creating an environment in which a mixture of chemicals forms. The interconnection between these media eventually forces the HFE and the lubricant oil to combine. The resulting pseudo-mixture of the two chemicals within the heat exchanger eventually leads to CPU staining, which ultimately contributes to the loss of the CPU. Further investigation of the microcrack environment indicated that the contamination yields a low-sensitivity signal because of the inadequate amount of interconnection between the two chemicals. Amplification and a pre-detection system are therefore needed to trigger an alert when the microcrack environment occurs, eventually leading to a robust detection system that uses lasers both as a light source and for detection amplification.
The detection methodology and its image capturing system deployed in the HVM environment use a hybrid of particle image velocimetry (PIV) [11] and dynamic light scattering (DLS) [12]. PIV is most commonly used as a quantitative method able to measure particle velocity in the spatial and temporal domains relative to a planar or tomographic dimension [13]–[15]. References [13], [14], and [15] also describe the application of PIV-CNN to estimating a dense motion field that can provide details (small-scale structures) of turbulent flow. The architecture in [13] takes a particle image pair as input and outputs a velocity field with displacement vectors at every pixel, providing a final reconstruction of the particle image rather than a pure classification, while the architecture in [14] focuses on a regression-based PIV estimator rather than a classifier. Reference [15] applied an architecture that determines the likelihood of each area containing focused particles in re-projected 3D image stacks, recreating and forecasting the velocity under the influence of flow field reconstruction. DLS records the fluctuations in scattering intensity over time to characterize motion within the sample [16]. By quantifying these fluctuations, through either correlation or spectral analysis, diffusion coefficients can be calculated, which in turn can be used to determine hydrodynamic properties. In short, DLS measures how scattering changes over time, regardless of the amplitude of detection. This fundamental principle is further explained in [17]. In [18], the research emphasis is on predicting fluids containing nanoparticles and microparticles against the traditional DLS process. Reference [18] described the challenge and objective of generating a stochastic process using a deterministic method, namely the time series produced by light scattered consistently by a suspension; its results concentrate on a proof of concept using neural networks (NN) for processing DLS time series. The research in [19] approaches light scattering control by determining the functional relationship between transmitted and reflected speckle patterns using an NN. In [20], an SVM is used to predict the holographic conditioning of colloidal spheres, aligned with Lorentz-Mie theory, enabling each particle to be tracked in three dimensions and its size and refractive index to be measured. Conclusively, PIV usability is focused on randomly repeatable motion via measurement of its statistical properties, which eventually contribute to its velocity attributes [21]–[23], whereas DLS acts as a backbone application for the fundamental detection of a particle in static liquid form [17].
The basic foundation of HFE impurity detection can be described by imagining a condition where a laser hits the container (a transparent polycarbonate coupler): the coupler is illuminated with a laser source, and the scattered light yields a speckle pattern in the far field. The light distribution is therefore consistent by default. All of the molecules in the HFE are hit by the light, and all of them diffract the light in all directions [12]. The diffracted light from all of the molecules can interfere either constructively (light regions) or destructively (dark regions). The stochastic element described here is the velocity of the liquid, which is constantly agitated by the diaphragm pump fins. This mimics quick and transformative chemical reactions between the two chemicals.
Ultimately, both techniques focus on detecting the stochastic behavior of the particles using a laser as the main illumination source. This is due to the nature of the detection, which is stochastic, erratic and unpredictable. As stated in [24], illumination with a laser source yields scattered light with a speckle pattern. The speckle pattern is the property that is eventually harnessed to amplify the chemical contamination detection.
Due to the stochastic nature of HVM, a new architecture known as the deep learning laser speckle contrast evolving spiking neural network (DL-LSC-ESNN) is introduced. The DL-LSC-ESNN utilizes a CNN for its feature extraction layers, owing to the CNN's ability to represent its input as a tensor in which local elements are correlated with one another [25].
The nature of HVM HFE contamination is highly erratic in its random particle displacement. To combat this, a computational model is needed that captures and learns whole patterns from data streams to more accurately predict future events for new input data. The brain-inspired ESNN has the ability to learn patterns using trains of spikes [26]. Furthermore, the 3D topology of a spiking neural network (SNN) reservoir has the potential to capture an entire pattern at any given time point [27]. This feature is a focal requirement and eventually led to the choice of the ESNN architecture. A further characteristic of the ESNN, as stated in [28], is that it can quickly adapt to new knowledge. The system dynamically adapts its synaptic weights either by assimilating similar information or by creating a new synaptic weight repository [29]. Thus, this study investigates and explores a highly robust mechanism for adaptability and evolving ability, eventually leading to an image classification technique that combines the core strength of the CNN (spike feature extraction) with the evolving ability of the ESNN.
Monochromatic Polycarbonate Laser Attributes
This study experimented with HFE laminar flow. When the laser hits the transparent polycarbonate coupler containing the HFE within the cylindrical containment chamber, the coupler is illuminated with a laser source and the scattered light yields a speckle pattern in the far field. The spatial intensity distribution of the speckles is dictated by the summation of the angular-dependent scattering efficiencies of the density fluctuations in the illumination/detection volume and by the phase relationship of the scattered fields [24]. Thus, the light distribution is consistent by default. All of the liquid molecules in the HFE polycarbonate chamber are hit by the light, and all of the molecules diffract the light in all directions [12]. The diffracted light from all of the molecules can interfere either constructively (light regions) or destructively (dark regions). In liquid form, light traverses more slowly, creating a pseudo-event of light absorption [12].
Further investigation in this study indicated that a laser speckle pattern is generated on a rough surface by laser irradiation, and that the laser speckle particles relate to the laser wavelength and the sample surface roughness and have a fairly standardized distribution [30]. To simplify the attribute dependencies: laser speckle is not directly influenced by temperature and is thus appropriate for measuring deformation in extremely high-temperature settings, which may compensate for the exclusion of artificial speckles. Here, temperature can therefore be excluded as a dependency.
The correlation of speckles passing through a dynamic scatterer was studied in [31], which found that this feature is related to the structure of the scatterer's surface and the laser coherence [32]. As the speckle pattern depends on its surfaces, the polycarbonate HFE smart coupler has to go through process control to maintain surface consistency. Reference [33] mentioned that the coherence of laser beams passing through a moving diffuser depends on the period of observation; both temporal coherence and spatial coherence were considered. Speckle noise can obviously be caused by a spatially incoherent laser light source [34]. This can be countered by using a control mechanism that captures the laser images through a smart feedback system.
Over the years, the study of speckle has been delineated by speckle contrast (SC), defined as the ratio of the standard deviation of the intensity fluctuation to the mean intensity. SC is appealing because it has a clear physical meaning and is mathematically convenient. SC varies with changing mean intensity [35]. Translated into a similarity approach, the comparison is analogous to a contrast representation.
Further research has indicated that SC is supported by the point spread function (PSF) tuning technique as a countermeasure. Because of its essential deconvolution processing and retrieval of the object's image [36], PSF polarization is implemented, as it has proven useful for imaging targets in scattering media and for enhancing image contrast [37]. Ultimately, this path indicates that monochromatic lasers can be subdivided into three categorical approaches: irradiation, coherence and randomized phases. Each category can be translated into the SC core attributes, which are subdivided into the two categories of fluctuation and mean intensity.
Further investigation in this study indicated that neither the mean nor the fluctuation intensity is complete without a proper representation via an encoding technique, since the encoding represents the information from a much deeper and different perspective.
Fig. 1 shows the attributes of monochromatic lasers that will be used to encode the laser from its original images with SC properties, using the mean intensity approach [38], into a rank order population (ROP) spike train encoding [39]. The encoding approach amplifies the spike train domain before the FE layer. This ensures that only the strongest spike essences are captured and represented in the final FE before being fed into the ESNN layer.
The laser image is divided into nine sections. Referring to Fig. 2, each section represents contrast attributes. Sections 1, 3, 4, 5, 6, 7 and 9 of Fig. 2 align heavily with the SC concept, while sections 2 and 8 show heavy contrast and brightness intensity. These attributes are further encoded with the encoding scheme to boost their dimensional additive.
A. Quantification and Formulation of Monochromatic Polycarbonate Laser Speckle Contrast Encoding
Laser speckle contrast imaging (LSCI) is a sophisticated and useful imaging method that uses the speckles of a highly coherent light source (a laser), which are randomly generated on the image sensor. This imaging technique is regarded as an economical way to obtain information on the movement or flow of the target medium [38].
The speckle contrast can be determined by analyzing the reflection intensity of the image [38], [40]–[43]. The speckle contrast $K$ is defined as \begin{equation*} K=\frac {\sigma _{std}}{\left \langle{ I}\right \rangle }\tag{1}\end{equation*} where $\sigma_{std}$ is the standard deviation of the intensity fluctuation and $\langle I \rangle$ is the mean intensity.
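To make Equation (1) concrete, the following minimal sketch computes both a global and a sliding-window speckle contrast for a grayscale intensity image; the window size and the NumPy-based implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(intensity: np.ndarray) -> float:
    """Global speckle contrast K = sigma_std / <I> of Equation (1)."""
    return float(intensity.std() / intensity.mean())

def local_speckle_contrast(intensity: np.ndarray, win: int = 7) -> np.ndarray:
    """Sliding-window speckle contrast map, as typically used in LSCI.

    For each win x win neighborhood, K = std(I) / mean(I); the epsilon
    guards against division by zero in dark regions.
    """
    patches = sliding_window_view(intensity, (win, win))
    mean = patches.mean(axis=(-2, -1))
    std = patches.std(axis=(-2, -1))
    return std / np.maximum(mean, 1e-12)
```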
Conventional CNN with forward and backward propagation flow. Iterative layers of convolution, ReLU and pooling are utilized for FE.
For the spike train encoding, this study utilizes ROP. ROP was chosen because of the drawbacks of the conventional spiking neural network (SNN) rate-coding probabilistic conversion, which requires long simulations [48]. ROP alleviates these issues by encoding the SC images into pixel-based spikes. A further justification for this conversion process is the requirement to comply with ESNN usability. From a biological perspective, since retina cells fire with remarkable temporal precision [49], a single spike can, in principle, carry substantial information about visual stimuli [50]. Extensive evidence indicates that the underlying idea behind ROP is that individual cells by themselves do not carry much information but together, as a population, they are or could be sufficient. Reference [51] suggests that synergistic encoding of information in the relative activities of a neuronal population is a feature of retinal ganglion cell (RGC) responses at the population level. This observation indicates the effectiveness of applying ROP-based encoding, which closely emulates nature. The spiking encoding scheme accomplishes an information ventral stream, meaning that both spiking rate and time can be used to represent the structural information within the ventral stream. As the intensity of a stimulus increases, the rate of spikes increases to convey more important information [52]. To summarize, ROP's true functionality is its ability to generate spikes by sequentially sorting input values from its pre-layers [48] after SC conversion.
As aforementioned, the input information must be expressed as spikes in the ESNN, so the converted image of Fig. 3 must be encoded into spikes. The ESNN is well known to use the ROP encoding scheme, which requires the real-valued dataset to be mapped into a sequence of spikes. To achieve this, a neural encoding technique is required; the ESNN utilizes ROP encoding as its encoding scheme, first described in [53]. Receptive fields allow continuous values to be encoded using a set of neurons with overlapping sensitivity profiles. Each input variable is represented independently by $N$ one-dimensional receptive field units. For a variable $n$ of the interval $[I_{min}^{n}, I_{max}^{n}]$, the center $C_{j}$ of receptive field $j$ is set to \begin{equation*} C_{j}=I_{min}^{n}+\frac {2j-3}{2}\left({\frac {I_{max}^{n}-I_{min}^{n}}{N-2}}\right)\tag{2}\end{equation*}
and the width $W_{j}$ to \begin{equation*} W_{j}=\frac {1}{\beta }\left({\frac {I_{max}^{n}-I_{min}^{n}}{N-2}}\right)\tag{3}\end{equation*} where $\beta$ controls the width of each receptive field.
The firing strength of receptive field $j$ for an input $x$ follows the Gaussian sensitivity profile \begin{equation*} output_{j}=\exp \left ({-\frac {(x-C_{j})^{2}}{2W_{j}^{2}}}\right)\tag{4}\end{equation*}
which is converted into a firing time \begin{equation*} T_{j}=\left |{T(1-output_{j})}\right |\tag{5}\end{equation*} so that strongly excited receptive fields fire earlier.
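As a worked illustration of Equations (2)–(5), the sketch below encodes a single real value into the firing times of $N$ overlapping Gaussian receptive fields; the parameter defaults (number of fields, beta, the time window) are illustrative assumptions.

```python
import numpy as np

def rop_encode(x: float, i_min: float, i_max: float,
               n: int = 10, beta: float = 1.5, t_max: float = 1.0) -> np.ndarray:
    """Encode one real value into N firing times per Equations (2)-(5)."""
    j = np.arange(1, n + 1)
    span = (i_max - i_min) / (n - 2)
    centers = i_min + (2 * j - 3) / 2 * span                      # Equation (2)
    width = span / beta                                           # Equation (3)
    excitation = np.exp(-(x - centers) ** 2 / (2 * width ** 2))   # Equation (4)
    return np.abs(t_max * (1 - excitation))                       # Equation (5)

# A strongly excited receptive field fires early (T_j close to 0):
times = rop_encode(0.4, 0.0, 1.0)
earliest_field = int(np.argmin(times))
```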
Convolutional Neural Networks (CNN)
In the last few years, the CNN has become an outstanding technology and has led to better performance in many fields. A CNN architecture has several iterative levels, including convolution, ReLU, and pooling layers. Each of these layers is simple but non-linear [54]. The distinguishing point is that the CNN emphasizes the importance of the feature extraction layers: each layer transforms a low-level input representation into a more abstract representation.
Spiking CNN
A typical spiking CNN consists of CLs, PLs, a fully connected linear classifier, and a spike counter. It follows the conventional CNN in the same order except for the spike counter. The CL uses weight sharing to reduce the number of parameters, as in a conventional CNN, but the information in a spiking CNN is transferred via spike trains instead of real values [55], [56]. The spike trains are conventionally generated via rank-order or population-based encoding [48]. This ensures that neurons in the CL detect more complex features by integrating input spikes from the previous layer, which detects abstract visual features. The convolutional neurons emit a spike as soon as they detect their preferred visual feature, which depends on their input synaptic weights [55], [57]. Each neuron is selective to a visual feature determined by its input synaptic weights on its specific map, detecting the same visual feature at different locations. To this end, the synaptic weights of neurons belonging to the same map should always be the same. The PL provides translation invariance using a maximum operation, and also helps the network compress the flow of visual data [58]. Neurons in PLs propagate the first spike received from neighboring neurons in the previous layer that are selective to the same feature [57], [58]. The spiking CNN's CLs and PLs are arranged in consecutive order if further FE magnitude is needed. Fig. 5 shows the network architecture of a typical spiking CNN.
Ultimately, the justification from the spiking CNN indicates that the CL is significant due to its ability to extract only the important spikes generated after ROP conversion [48], [58]. Each CL extracts features through a spiking convolution process. Then, the pooling layer combines the outputs of a neuron cluster in one feature map (FM) into the input of one neuron in the next layer [58]. The pooling layer (PL) also behaves as an FM reducer, shrinking the FM by pooling the maximum amplitudes given by the CL [48].
A. Quantification and Formulation of CNN
The CL is parameterized by the size and number of maps, the size of the kernel, the skipping factors, and the connection table. Each layer is similarly sized with M maps of size (Mx, My). A kernel (blue rectangle in Fig. 6) of size (Kx, Ky) is shifted over the valid region of the input image (i.e. the kernel must lie completely inside the image). The skipping factors Sx and Sy determine how many pixels the filter/kernel skips between subsequent convolutions in the x and y directions [59]. The size of the output map is then defined by Equation (6):\begin{equation*} M_{x}^{n}=\frac {M_{x}^{n-1}-K_{x}^{n}}{S_{x}^{n}+1}+1;\quad M_{y}^{n}=\frac {M_{y}^{n-1}-K_{y}^{n}}{S_{y}^{n}+1}+1\tag{6}\end{equation*}
The convolution itself sums the weighted contributions of the kernel over each input patch: \begin{equation*} Convolve_{a,b,c}=\sum _{l=0}^{k-1}\sum _{m=0}^{w-1}\sum _{n=0}^{h-1}M_{sp+n,\,sq+m,\,r}\,K_{lmn}+B_{lmn}\tag{7}\end{equation*}
The activation layer consists of an activation function, such as sigmoid, tanh or ReLU. Nowadays, the activation layer is typically ReLU, which from a mathematical perspective converts any negative value to zero and is linear for all positive values. Because of this, it is computationally cheap. Moreover, ReLU converges faster, as it does not suffer from the vanishing gradient problem the way sigmoid and tanh do. In a sparse network, it is more likely that neurons are actually processing meaningful aspects of the problem. The ReLU activation is given in Equation (8).\begin{align*} f(x)=\begin{cases}0&\quad for~x < 0\\ x&\quad for~x\geq 0\end{cases}\tag{8}\end{align*}
Max pooling takes the maximum over each pooling window: \begin{equation*} f(x')_{d,e,f}=\underset {a,b\in P_{i,j}}{\max }~f(x)_{a,b,c}\tag{9}\end{equation*} where $P_{i,j}$ is the window for the pooling operation.
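A compact NumPy sketch of Equations (6)–(9) follows: output map sizing, a valid convolution with a skipping factor, ReLU, and non-overlapping max pooling. It operates on a single 2-D map for clarity; the actual layers operate on stacks of maps, and the implementation details are illustrative assumptions.

```python
import numpy as np

def output_size(m_prev: int, k: int, s: int) -> int:
    """Equation (6): the effective stride equals the skipping factor + 1."""
    return (m_prev - k) // (s + 1) + 1

def convolve(image: np.ndarray, kernel: np.ndarray,
             bias: float = 0.0, skip: int = 0) -> np.ndarray:
    """Valid 2-D convolution of Equation (7) with skipping factor `skip`."""
    kh, kw = kernel.shape
    stride = skip + 1
    oh = output_size(image.shape[0], kh, skip)
    ow = output_size(image.shape[1], kw, skip)
    out = np.empty((oh, ow))
    for p in range(oh):
        for q in range(ow):
            patch = image[p*stride:p*stride+kh, q*stride:q*stride+kw]
            out[p, q] = np.sum(patch * kernel) + bias
    return out

def relu(x: np.ndarray) -> np.ndarray:
    """Equation (8): zero for negative inputs, identity otherwise."""
    return np.maximum(x, 0.0)

def max_pool(fm: np.ndarray, win: int = 2) -> np.ndarray:
    """Equation (9): maximum over non-overlapping win x win windows."""
    h = fm.shape[0] // win * win
    w = fm.shape[1] // win * win
    return fm[:h, :w].reshape(h // win, win, w // win, win).max(axis=(1, 3))
```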
Evolving Spiking Neural Network
The emerging ESNN [60] is definitely appealing in the context of HFE chemical contamination. An ESNN can grow and learn new information by evolving (e.g. adding) neurons without retraining [61]. ESNNs function online by design, providing a fast-updating and functional framework for adaptive online learning. This unique feature gives them their greatest benefit [62].
The above-mentioned developments in the use of large amounts of HFE data for online forecasting pose significant challenges linked to its stochastic character and its evolution over long periods [63], [64]. The most obvious advantage of the ESNN is that its neural networks can learn to carry out satisfactory tasks without explicit plant-specific models. In circumstances in which identical copies are hard to find, this is strongly favored [65].
ESNNs are modular, decentralized systems that continuously, independently, adaptively and interactively build their structure and functionality online on the basis of incoming information [28]. Their topology is purely feedforward, arranged in several layers, and the connections between the neurons of existing layers are subject to weight changes. Given this evolution and topology, an extended version of the ESNN is needed for the complex and unpredictable nature of HVM. As mentioned earlier, an ESNN by nature has the ability to evolve its neuron repository. This evolving nature is a key element of its success, as discussed in [28], [66]. The purpose of the learning method is to generate output neurons, each of which is marked with a class label; the number and value of class labels depend on the classification problem to be solved. During the presentation of training samples, the learning algorithm successively generates a pool of trained output neurons, the idea being that a single repository is evolved for each class label.
A. Quantification and Formulation of Evolving Spiking Neural Networks
The ESNN originated from the evolving connectionist systems (ECoS) methodology; it was initially proposed in [61], and the architecture was intended for visual pattern recognition [60]. From the perspective of the neural model, it fundamentally uses Leaky Integrate-and-Fire (LIF) neurons. This model was described in [67]: the information transfer is in the spike domain, and the spike response depends on the arrival time. The early spike response is important because the post-synaptic potential of an earlier spike is critical. This principle is fascinating because the brain is able to perform even complex tasks quickly and accurately using the information provided by these early spikes.
The generation of these early spikes fully utilizes the dynamism of the Thorpe and Gautrais model, described by the dynamics of the post-synaptic potential $u_{i}(t)$ of a neuron $i$:\begin{align*} u_{i}(t)=\begin{cases} 0&\quad if~fired\\ {\displaystyle \sum \nolimits _{j\vert f(j) < t}}w_{j,i}\,m_{i}^{order(j)}&\quad otherwise\end{cases}\tag{10}\end{align*} where $w_{j,i}$ is the weight of pre-synaptic neuron $j$, $m_{i}\in (0,1)$ is a modulation factor, $f(j)$ is the firing time of neuron $j$, and $order(j)$ is the rank of neuron $j$'s spike in the input sequence.
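The following one-function sketch evaluates the non-fired branch of Equation (10) for a neuron whose pre-synaptic spikes have already been ranked; the rank array and the modulation factor default are illustrative inputs.

```python
import numpy as np

def psp(weights: np.ndarray, order: np.ndarray, mod: float = 0.9) -> float:
    """Post-synaptic potential of Equation (10) before firing.

    order[j] is the rank of pre-synaptic neuron j's spike (0 = earliest),
    so earlier spikes contribute more through mod ** order(j), 0 < mod < 1.
    """
    return float(np.sum(weights * mod ** order))
```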
As for the learning methodology of the ESNN, it uses the algorithm from [66]. The ESNN algorithm is equivalent to a feedforward network organized in multiple layers. The objective of the ESNN's learning method is to create output neurons, each with its own class label; the connection weights, firing threshold and merging rules are given by Equations (11)–(15):\begin{align*} w_{j,i}=&(mod)^{order(j)}\quad \forall j\vert j~pre\text {-}synaptic~neuron~of~i \tag{11}\\ \gamma _{i}=&PSP_{max(i)}\cdot C\tag{12}\\ PSP_{max(i)}=&\sum \nolimits _{j}w_{j,i}\,mod^{order(j)}\tag{13}\\ w_{j,i}=&\frac {w_{new}+(w_{j,i}\cdot M)}{M+1}\tag{14}\\ \gamma _{i}=&\frac {\gamma _{new}+(\gamma _{i}\cdot M)}{M+1}\tag{15}\end{align*}
Equation (14) and Equation (15) compare the trained neuron with the neurons stored in the repository. If the Euclidean distance between their weight vectors is smaller than the similarity threshold (SIM), the trained output neuron is considered equivalent. As a result, the weight vector and threshold are assimilated according to Equation (14) and Equation (15) respectively, by averaging the connection weights and averaging the two firing thresholds, where $M$ is the number of previous merges. The merged trained neuron is then discarded rather than added to the repository.
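Putting Equations (11)–(15) together, a minimal one-pass training sketch might look as follows; the data layout (one spike-rank vector per sample) and the parameter defaults are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def train_esnn(rank_vectors, labels, mod=0.9, c=0.7, sim=0.3):
    """One-pass ESNN training following Equations (11)-(15).

    rank_vectors: iterable of spike-order arrays (rank per pre-synaptic neuron).
    Returns a repository mapping each class label to a list of
    [weight_vector, threshold, merge_count] entries.
    """
    repository = {}
    for order, label in zip(rank_vectors, labels):
        order = np.asarray(order, dtype=float)
        w = mod ** order                               # Equation (11)
        psp_max = np.sum(w * mod ** order)             # Equation (13)
        gamma = c * psp_max                            # Equation (12)
        neurons = repository.setdefault(label, [])
        best = min(neurons, key=lambda n: np.linalg.norm(n[0] - w), default=None)
        if best is not None and np.linalg.norm(best[0] - w) < sim:
            m = best[2]                                # merge with stored neuron
            best[0] = (w + best[0] * m) / (m + 1)      # Equation (14)
            best[1] = (gamma + best[1] * m) / (m + 1)  # Equation (15)
            best[2] = m + 1
        else:
            neurons.append([w, gamma, 1])              # evolve a new neuron
    return repository
```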
Proposed Deep Learning Laser Speckle Contrast Evolving Spiking Neural Network (DL-LSC-ESNN)
Due to the stochastic state of HVM, a new DL-LSC-ESNN architecture is proposed. The core issue is that HVM HFE contamination is highly unpredictable in the random particle displacement of its laminar flow. To counter this, a computational model must capture and learn all the trends from data streams in order to predict the most probable future events for new input data. One of the DL-LSC-ESNN's known components, the ESNN, has the ability to learn patterns using ROP spikes [26] and is supported by a reservoir that has the potential to capture a whole pattern at any given time point [27]. The ESNN, as described in [28], can quickly adapt to new knowledge by dynamically adapting its synaptic weights [29]. Other components of the proposed DL-LSC-ESNN, such as the SC conversion at the input stage, exist to amplify the detection. As discussed, the SC conversion idea comes from the fundamentals of LSCI. The investigation of LSCI showed that the laser speckle pattern generated on the rough polycarbonate surface by laser irradiation, whose speckle particles relate to the laser wavelength and sample surface roughness, has a fairly standardized distribution [30]. This leads to a straightforward and highly economical methodology for obtaining information on the laminar flow inside the polycarbonate chamber, which puts LSCI in the position of being harnessed as part of the SC formulation and quantification. As for the CNN component of the DL-LSC-ESNN, the aforementioned argument for its integration is its reliable FE performance. The CNN's FE ability, together with the SC-ROP integration, eventually leads to stronger translation invariance [58]. These distinctive abilities combine to strengthen the DL-LSC-ESNN's adaptability to the stochastic nature of HVM.
The DL-LSC-ESNN architecture starts by receiving the default image in RGB format. The first step is the conversion from RGB to the SC domain. Next, the SC domain is converted to grayscale pixel intensity in the range (0,1) [55] to intensify the brightness of the laser speckle contrast (LSC). Fig. 7(B) shows the LSC conversion. To simulate the actual pulses of human vision, the conversion of the LSC to ROP encoding is then initiated, following Equation (2) through Equation (5). Fig. 7(C) visually depicts the conversion using ROP encoding [28], [62], [66], [68]. This concept has previously used ROP as stated in [52], [56], [57], [69].
Proposed deep learning laser speckle contrast evolving spiking neural network (DL-LSC-ESNN) architecture.
The technique to capture and store the final FM for visualization in the CNN approach.
The technique to capture and store the final FM for visualization in the proposed DL-LSC-ESNN approach.
Because the ESNN architecture is used as the backbone of this architecture, the sample sequence of spikes (spike trains) is encoded using the ROP scheme. Fig. 7(D) shows the conversion of the image into a newly formed image based on neuron depth. The next step is to pre-process the neural-encoded image through the convolution of Equation (7) using a stochastic kernel and bias. This process generates the FM of the CL. Following the conventional flow of a CNN architecture, the FM is fed into an activation function (ReLU) layer, which converts negative values to zero and passes all positive values linearly; refer to Equation (8), Equation (9) and Fig. 7(E,F,G).
The PL's function is to reduce the size of the feature map. As stated, the kernel scans for the maximum value within its kernel window; this is known as max pooling, as commonly used throughout the research community. Pooling the important information reduces the size of each feature map by a factor of two.
After pre-processing, the essence of this neuron conversion is fed into the ESNN neuron repository learning algorithm. The learning algorithm eventually creates output neurons, each with its own class label.
The algorithm is composed of a combination of LSC, CNN and ESNN. Fig. 10 shows the DL-LSC-ESNN algorithm flow. It starts by initializing the neuron repository NR = {} and the ESNN and CNN parameters (lines 1 to 4).
As mentioned above, CNNs are known to be good feature extractors; their core utilization here is to capture the essence of the converted neural encoding.
By capturing only the essence of the core neural encoding scheme, only the most important spikes are retained. The next step is to run the ESNN training algorithm: for every pattern originating from the same group, a new output neuron is generated and connected via weights. The overall flow is sketched below.
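Combining the pieces above, the feature path of the proposed architecture can be sketched end to end; this reuses the hypothetical helpers from the earlier sketches (local_speckle_contrast, rop_encode, convolve, relu, max_pool, train_esnn) and is a schematic of the flow in Fig. 10 under those assumptions, not the authors' implementation.

```python
import numpy as np

def dl_lsc_esnn_features(rgb: np.ndarray, kernel: np.ndarray, bias: float) -> np.ndarray:
    """Feature path: RGB -> LSC domain -> ROP spike times -> conv/ReLU/pool -> ranks."""
    gray = rgb.mean(axis=-1) / 255.0                       # grayscale in (0, 1)
    sc = local_speckle_contrast(gray)                      # Equation (1) map, Fig. 7(B)
    # ROP spike-time volume: one depth slice per receptive field, Fig. 7(C)-(D)
    flat = np.array([rop_encode(v, sc.min(), sc.max()) for v in sc.ravel()])
    spikes = flat.reshape(*sc.shape, -1)
    # stochastic kernel over the earliest-firing slice, then ReLU and max pooling
    fm = max_pool(relu(convolve(spikes[..., 0], kernel, bias))).ravel()
    # double argsort yields ranks (0 = strongest response) for train_esnn
    return np.argsort(np.argsort(-fm))
```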
Experimental Hardware Setup
Due to the HVM chiller's unpredictable nature, the experiment needed to further expand the detection method using a radical hardware approach. The hardware setup is described to clarify the data collection technique and its main components. Fig. 11 shows the high definition (HD) camera and laser chamber, designed to keep the camera and laser in a controlled environment. The chamber is an aluminum enclosure with a black background as a control environment. For lighting control, a monochromatic green laser (532 nm) was chosen, with a feedback control system via its switching regulator for a constant current and voltage supply. The entire setup ensures that no external lighting distorts the internal chamber environment. Fig. 11 also shows the diaphragm pump, whose function is to agitate the targeted HFE chemical using the forceful kinetic motion of the diaphragm blades.
The system's key detection element is the observation window, made of polycarbonate. This material was chosen to withstand extreme pressure and temperature. The system monitored chiller behavior via its internal tank circulation; the tank acted as a central collection point for the HFE chemical, connected to the heat exchanger, and closely simulated the after-effect condition. The after-effect condition occurs when the evaporator and condenser are in process and their pseudo-contact via the heat exchanger eventually provides the highest chemical combination. As the objective of the system is to detect the contamination of HFE inside the chiller tank, this testing method uses ambient temperature, which is important from a safety perspective.
Results and Discussion
A. Dataset Visualization and Similarity Measurement
The visualization of the HFE impurities dataset provides an understanding of how difficult the dataset structures are to separate and how they are associated with each other. To perform this task, the visualization mechanism of t-distributed stochastic neighbor embedding (t-SNE) is applied. Historically, t-SNE [70] has been efficient at finding the underlying data structure. The main idea behind t-SNE is to preserve the local data structure by preserving pairs of data from the original space to the embedded space [71]. As previously discussed, the stochastic behavior of HFE impurity detection makes it necessary to visualize the nature of the image data distribution. This visualization provides an extensive understanding of the interaction of each sample in the data structure. With its ability to preserve and find the underlying data structure, t-SNE is the most suitable visualization technique for a stochastic dataset. Moreover, t-SNE is specifically designed to avoid dense clustering of points, which makes it possible to visualize large, high-dimensional data in a clearer way [70]. Due to the research nature and inspired by parametric t-SNE [72], t-SNE is presented with an explicit nonlinear function in this research. The visualization of the raw image dataset is shown in Fig. 12.
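As a sketch of how such a plot can be produced, the snippet below embeds flattened image vectors with scikit-learn's t-SNE; the array names and parameter values are illustrative assumptions, and the parametric variant [72] used in this research would replace the plain TSNE call.

```python
import numpy as np
from sklearn.manifold import TSNE

def tsne_embed(images: np.ndarray, perplexity: float = 30.0) -> np.ndarray:
    """2-D t-SNE embedding of (n_samples, n_pixels) image vectors for Fig. 12-style plots."""
    return TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=0).fit_transform(images)
```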
The qualitative visualization in Fig. 12 shows that the data are non-separable and non-linear (PHFE indicates pure HFE and xPHFE indicates contaminated HFE). For a sample of the HFE impurities dataset, there are no clear boundaries between the subtypes of samples. They are highly commingled, and certain data points are completely assimilated. This gives a clear picture of the difficulty of classifying HFE contamination, which therefore requires the LSC domain conversion and RN cubic form creation to increase the dimensional additive.
To further support the qualitative argument of the original visual dataset's assimilation, Fig. 13(A) and (B) show the cosine similarity between pure and contaminated HFE. Cosine similarity was used to measure the similarity of each pair of images. Cosine similarity compares datasets irrespective of their size by measuring the cosine of the angle between two vectors projected in a multidimensional space [73]–[79]; the highest similarity value is 1. In Fig. 13(A), for the original dataset, the spread is mostly accumulated at 1, indicating that strong similarity occurs. In Fig. 13(B), the mean similarity is 0.996, with the spread between 0.995 and 0.997. This indicates how narrow and concentrated the similarity between the pure and contaminated HFE datasets is.
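A minimal sketch of the similarity measurement behind Fig. 13 follows, computing the cosine of the angle between two flattened image vectors; the function and argument names are illustrative.

```python
import numpy as np

def cosine_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Cosine similarity of two images; 1.0 means identical orientation in pixel space."""
    a = img_a.ravel().astype(float)
    b = img_b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```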
t-SNE visualization of the 4-layer CNN result.
t-SNE visualization of the 4-layer DL-LSC-ESNN result.
Characterization of kernel.
B. Baseline Setup and Results
A proper accuracy validation against the state of the art (well-known DL architectures) is needed for the DL-LSC-ESNN architecture, to ensure that the DL-LSC-ESNN is highly competitive. Here, the study compares the DL-LSC-ESNN against state-of-the-art architectures such as ResNet [80]–[82], VGGNet [83]–[85] and Inception Net [86]–[88]. Refer to Table 2 for the state-of-the-art accuracy.
The DL-LSC-ESNN was built upon layers of a rudimentary CNN; as aforementioned, the CNN layers act as the FE for the DL-LSC-ESNN. Given this usability and functionality, a characterization baseline is needed to further strengthen the validation accuracy via a rudimentary CNN baseline case study. For the rudimentary CNN experimentation, the filter weights of each layer are initialized randomly from a Gaussian distribution.
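A sketch of this initialization is shown below; since the exact Gaussian parameters are not reproduced here, the mean of 0 and the small standard deviation are assumptions, as is the filter-bank shape.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical filter bank: 32 output maps, 1 input channel, 3x3 kernels.
# Mean/std are illustrative; the paper's exact Gaussian parameters are not shown.
filters = rng.normal(loc=0.0, scale=0.05, size=(32, 1, 3, 3))
```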
The baseline rudimentary CNN accuracy (refer to Fig. 26) was generated to investigate the optimal number of layers for the highest accuracy on the HFE dataset. Adding or removing layers gave an accuracy offset with quartiles of ~62.66 and ~58.11 and a maximum accuracy of 63.64. The spread indicates how strongly the layer count impacts accuracy, and shows that adding more layers further decreases accuracy. For this experimentation, the CNN setup is a 4-layer configuration.
C. DL-LSC-ESNN Characterization and Result
From the perspective of architectural differences, referring to Fig. 4 and Fig. 7, the DL-LSC-ESNN adds layers for the LSC domain by utilizing Equation (1). The initial transformation of the domain from the conventional CNN's RGB inputs (refer to Fig. 4) to the LSC domain is achieved through Equation (1). The DL-LSC-ESNN also adds a dimensional additive layer of RN after the LSC domain (refer to Fig. 7(C) and (D)). Fig. 7(C) utilizes Equation (2) through Equation (5); Equation (2) and Equation (3) are the definitive parameters that must be set as initial calculations.
As per Fig. 14, the results show the highest accuracy of 64.94% with similarity (SIM) 0.3 and threshold (TH) 0.6. The characterization of the parameter SIM is given in Fig. 19.
Fig. 15 and Fig. 20 show the results of the 4-layer convolution DL-LSC-ESNN.
Fig. 19 and Fig. 20 provide visibility into the stability of the kernel across the characterization.
Proceeding to Fig. 24(A) and (B), the experimentation clarifies the accuracy as a function of the kernel. The experimentation extends further by completing the characterization sweep on the kernel. Ultimately, two states of high accuracy are achieved across the kernel configurations. The finalization of the DL-LSC-ESNN characterization gave the final accuracy and its configuration, which is then compared against ResNet (refer to Table 2).
Conclusion and Future Works
The DL-LSC-ESNN eventually overcomes the conventional CNN for the HVM HFE impurities LSC polycarbonate application. This is achieved through the added layers of LSC domain conversion, RN cubic form creation and the ESNN neuron repository's evolving ability to generate and assimilate new weights. To put it into perspective, the DL-LSC-ESNN architecture is purely a feedforward algorithm, while the conventional CNN uses a combination of feedforward and backpropagation algorithms. The proposed technique of LSC domain conversion is highly practical for HFE impurities detection by significantly improving the detection accuracy.
Regarding accuracy augmentation for LSC HFE impurities detection, several suggestions for enhancing the accuracy should be considered for future implementation and improvement. The exploration of a probabilistic evolving spiking neural network (PESNN) architecture is the next implementation in line. The PESNN applies quantum probabilistic properties to the ESNN weights. The approach is to create a probabilistic state for each neuron weight connection, eventually providing a greater ability to assimilate or generate new neuron repositories. This new additive process methodology should be able to augment the detection accuracy even further.