Production Assessment using a Knowledge Transfer Framework and Evidence Theory

Operational knowledge is one of the most valuable assets in a company, as it provides a strategic advantage over competitors and ensures steady and optimal operation of machines. An (interactive) assessment system on the shop floor can optimize the process and reduce stoppages, because it provides the operators with a constant stream of valuable information regarding the machine condition. However, formalizing operational (tacit) knowledge into explicit knowledge is not an easy task. This transformation involves modeling expert knowledge, quantifying knowledge uncertainty, and validating the acquired knowledge. This study proposes a novel approach for production assessment using a knowledge transfer framework and evidence theory to address the aforementioned challenges. The main contribution of this paper is a methodology for the formalization of tacit knowledge based on an extended failure mode and effect analysis for knowledge extraction, as well as the use of evidence theory for the uncertainty definition of knowledge. Moreover, this approach uses primitive recursive functions for knowledge modeling and proposes a validation strategy for the knowledge using machine data. These elements are integrated into an interactive recommendation system hosted on a backend that uses the HoloLens as a visual interface. We demonstrate this approach using an industrial setup: a laboratory bulk good system. The results yield interesting insights, including the knowledge validation, the uncertainty behavior of the knowledge, and interactive troubleshooting for the machine operator.


I. INTRODUCTION
Sustaining operational know-how guarantees companies an advantage over competitors. This can be achieved by establishing best practices that ensure the optimal operation of machines and by recording troubleshooting approaches that reduce downtime [1]. This knowledge has been accumulated in company logs over the years. In the best of cases, it is recorded in the form of best-practice manuals, maintenance documents, and troubleshooting guides, so that future machine operators can access it. This type of knowledge is referred to as explicit knowledge. In comparison, tacit or implicit knowledge refers to the empirical expertise gained by operators on the shop floor. The transformation of implicit knowledge into explicit knowledge is not an easy task. The reasons for this include the lack of adequate knowledge transfer strategies, procedures, and tools for institutional knowledge internalization [2]. Knowledge transfer has been addressed through different strategies, such as peer-to-peer communication, written sources (e.g., books and user guides), audiovisual guides, or immersive augmented reality (AR) and virtual reality (VR) applications. However, some of these strategies might introduce bias into the acquired knowledge [3] (e.g., in peer-to-peer communication, where the sender chooses which information to share based on perceived relevance). Additional challenges include the quantification of knowledge uncertainty and effective strategies to validate the extracted knowledge. Knowledge transfer involves different stages, such as extraction, modeling, uncertainty definition, validation, and visualization [4] [5] [6] [7]. Thus, defining a knowledge transfer framework would provide a substantial step towards interactive knowledge transfer, as it would clearly identify each stage and the relevant challenges to be addressed. A knowledge transfer framework would allow the deployment of an interactive assessment system on the shop floor, which would provide a constant flow of valuable information concerning machine conditions to the operators, as well as a set of recommendations aimed at solving process issues.
This article proposes a novel methodology for production assessment based on an interactive knowledge transfer framework in which evidence theory is an intrinsic part. It provides a knowledge transfer framework that considers all knowledge stages, as it allows the extraction and transfer of knowledge, facilitated through a user interface (UI). This research identifies the existing challenges for each step in the knowledge chain and addresses them methodically.
The contributions of this paper are summarized as follows:
• Defining a methodology for the formalization of tacit knowledge based on an extended failure mode and effects analysis (FMEA) to extract knowledge from an expert panel objectively and systematically. In addition, the Dempster-Shafer evidence theory is used to evaluate the existing uncertainty factors in the extracted knowledge.
• Presenting the use of primitive recursive functions to create a knowledge-based model. This model integrates the knowledge extracted from the extended FMEA and the uncertainty defined through evidence theory.
• Presenting a strategy for knowledge validation based on key performance indicator (KPI) analysis. The KPIs are calculated using machine data over short- and long-term periods to consider the effects of the knowledge across time.
• Finally, defining a strategy to embed the knowledge transfer framework into an interactive assessment system hosted in a backend. The assessment system uses an augmented reality device as the UI to enhance the user experience. Moreover, we demonstrate this approach using a small-scale industrial setup.
This article is organized as follows. Section II presents the state of the art in knowledge transfer stages, quantification of knowledge uncertainty, and production assessment applications. Section III introduces the proposed model and methodology to address the knowledge chain. Section IV describes a practical implementation of this approach in an industrial setup. Finally, Section V presents the findings and remarks of the authors and provides an outlook for further research.

II. RELATED WORK
This article presents an approach to production assessment using an interactive knowledge transfer framework. The knowledge transfer framework addresses three main points: the formalization of tacit knowledge into explicit knowledge, the quantification of knowledge uncertainty, and the definition of a strategy to embed the knowledge framework into an interactive assessment system. Fig. 1 shows the stages of the knowledge transfer framework.

A. Knowledge Extraction and Uncertainty Quantification

The extraction of tacit knowledge requires a knowledge extraction method. For this purpose, different methodologies have been proposed, including the use of question-answering systems [8] [9] [10], process mapping [11], ontologies [12] [13], and lean manufacturing tools such as failure mode and effects analysis (FMEA) [14] [15]. However, knowledge extraction can be challenging, because the information should be representative and free of bias. Moreover, the knowledge extraction process can be time-consuming (especially for question-answering systems and interviews) or require a high level of detail and implementation effort, as in the case of ontologies. Therefore, we require a methodology that allows us to store information systematically, frequently, and without bias [16], so as to provide the machine operator with an objective, representative, and relevant assessment. A systematic approach helps overcome information bias, as it avoids relying on the subjective preferences of a single expert.
When establishing a body of explicit knowledge, one crucial feature is the uncertainty quantification of the knowledge, that is, the confidence level of the expert statements. Knowledge inaccuracies can be attributed to insufficiently detailed information or the occurrence of new, unknown events. Knowledge uncertainty has been represented using fuzzy systems [17], evidence theory [18] [19] [20], or hybrid systems [21] [22]. The next step in consolidating the acquired knowledge is therefore to define a strategy for quantifying its uncertainty.

B. Knowledge Modeling and Validation
Knowledge modeling has been addressed using (rule-based) expert systems [23] [24], ontologies [12] [13], fuzzy systems [17], and knowledge graphs [25], among others. However, modeling the extracted knowledge can be an exhausting task because of the level of detail to be provided, especially for expert systems and ontologies. The upside of such models is the inclusion of the expert domain in the modeling, whereas the downsides are the risk of biased information, the increased time consumption owing to the number of rules to be defined, and the subjectivity involved. The maintenance of the knowledge-based model also needs to be considered, as the knowledge will be modified frequently.
To this end, a knowledge extraction methodology using FMEA was presented in [26], where the authors proposed a strategy to formalize tacit knowledge, specifically machine faults, into a knowledge-based model in a systematic manner. This knowledge-based model contains rules that can be triggered using machine data. However, although the authors proposed a strategy to digitize knowledge quickly, and although the recommendation system detected the faults in the process, the knowledge rules carried no uncertainty quantification that could convey the certainty of a triggered fault to the machine operator.
Knowledge validation is necessary to evaluate the effectiveness of knowledge models. An evaluation performed using machine data can provide objective insights based on machine performance. Key performance indicators (KPIs) have been used to assess the performance of machines and processes, for example, acceptance rate, mean downtime, and operating time [27]. Lindenberg et al. [28] stressed the importance of KPIs for performance monitoring in industry, because they help identify poor performance and thus create improvement potential. Meier et al. [27] explained the role of KPIs in assessing the delivery of industrial services.

C. Interactive Assessment Systems
Interactive assessment systems have industrial applications using AR for remote maintenance [29], production and quality monitoring on the shop floor [30], and cross-platform dashboards for assembly operations [31]. Written documentation is popular for troubleshooting on the shop floor; however, the operator must find the proper terms that identify the problem and then search for a suitable solution [32]. Online documentation eases this problem when available. However, the assessment provided to the operator is a set of statements rather than a list of recommendations with an associated confidence level. An interactive recommendation system would support the operator in the decision-making process on the shop floor. Segovia et al. [30] investigated the effectiveness of an AR-based interactive system in decreasing defects on the shop floor, where the AR implementation assisted in improving quality reporting and decision making while displaying the necessary information to the user. Additionally, Hoffmann et al. [7] demonstrated the effectiveness of using the AR device HoloLens as a tool in a cyber-physical system (CPS) for knowledge and expertise sharing in manufacturing. That study discussed the importance of the visualization and interaction aspects of knowledge transfer between knowledgeable persons and knowledge seekers through a CPS, and found that gamification contributes positively to the knowledge transfer process. Mourtzis et al. [29] presented another method for interactive assessment using AR remote assistance, in which the user can contact remote experts for recommendations presented through AR scenes. This implementation managed to reduce travel costs and downtime. Kokkas et al. [33] used holograms in an AR application to test new plant layouts as a method for interactive assessment. That paper stresses the importance of having a real environment, stating that it allows a realistic assessment of solutions based on a quantitative and qualitative approach.
Our approach is distinct from the known state-of-the-art approaches in three ways. First, we propose a holistic approach to managing the overall knowledge chain; specifically, we concentrate on the quantification of knowledge uncertainty, as well as its inclusion in a knowledge-based model based on primitive recursive functions. Second, we propose a strategy for embedding a knowledge transfer framework into an interactive assessment system hosted in a backend. Third, we demonstrate the industrial plausibility of this approach using an industrial laboratory testbed that is comparable to industrial setups.

III. KLAFATE: KNOWLEDGE TRANSFER FRAMEWORK AND EVIDENCE THEORY
This research proposes a user-centered approach to gather process expertise from the shop floor using a KnowLedge trAnsfer FrAmework using evidence ThEory (KLAFATE). The system architecture is shown in Figure 2, which portrays the knowledge flow from its tacit form to an explicit (digitized) version. KLAFATE comprises two major sections: the knowledge update and the operational system.
The knowledge update section considers all the necessary steps to acquire knowledge for the first time, as well as every time the knowledge needs to be updated. The stages of the knowledge update are summarized as follows:
• Extraction of tacit knowledge and uncertainty quantification: The tacit knowledge from process assets is transformed into an explicit form using an extended version of the causal method failure mode and effect analysis (FMEA). The knowledge is extracted from an expert panel and written into the templates of the extended FMEA. The expert panel quantifies the knowledge uncertainty by defining weights for each rule. Thus, each rule weight is a function of predefined criteria: w_R = f(c_1, c_2, c_3, ..., c_n), where c_1, c_2, ... symbolize the criteria.
• Knowledge modeling and validation: The knowledge rules are transformed into a knowledge model using primitive recursive functions. Thus, the system can be represented as a switch case in which each case is a knowledge rule. The knowledge rules need to be validated regularly according to the criteria, which consider a data-based method that uses the KPIs of the system to validate each rule.
• Interactive assessment system: The system interacts with the user through a visual interface. This interface displays the assessment when a fault occurs and allows the operator to give feedback to the system in terms of system usability.
The operational system receives artifacts from the knowledge update section, namely, the knowledge model, rule weights, knowledge validation strategy, and AR application.
A. Theoretical Background

1) Boolean Logic Rules: A rule can be written as

R_i = P_1 O_1 P_2 O_2 ... O_{M-1} P_M,

where R_i is the i-th knowledge rule, P_j is the j-th operand, O_j is the j-th operator, and i, j, M ∈ N. The operator O_j is a logic operator (e.g., ≤, ≥, =, ≠, ∨, ∧, ¬).

(Figure 2: Knowledge Framework Overview)

The j-th operand P_j can be represented as a function of the process variables V and process thresholds T:

P_j = f(V, T).

Successively, the operand P_j can be decomposed using further operands:

P_j = P_1 O_1 P_2 O_2 ... O_{N-1} P_N,

where P_k is the k-th operand, O_k is the k-th operator, and j, k, N ∈ N. Thus, the operand P_j could take one of the following forms:

P_j ∈ {C_k, C_k O_k C_{k+1}, ...},

where C_k is a condition that is a function of the process variables V and process thresholds T:

C_k = f(V, T).

Thus, the condition C_k could take one of the following forms:

C_k ∈ {V ≤ T, V = T, if V ≤ T then x else y, ...}.    (6)

The knowledge rules return a Boolean output, which signals that a knowledge rule is active (e.g., the first two forms in Equation (6) return a Boolean output). In the case of the rule weights, the output is a real number in the range [0, 1] (e.g., the third form in Equation (6), an if-else statement, returns a real number).
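The rule structure above can be illustrated with a short sketch in Python (the backend language of this work). The variable and threshold names are invented for illustration and are not taken from the paper:

```python
# Minimal sketch of a knowledge rule R_i built from conditions C_k over
# process variables V and thresholds T, joined by logic operators.
def condition(v, t, op):
    """Condition C_k: compare a process variable against a threshold."""
    ops = {"<=": v <= t, ">=": v >= t, "==": v == t, "!=": v != t}
    return ops[op]

def rule_fill_level_low(V, T):
    """Rule R_i: two conditions joined by the logic operator AND."""
    return condition(V["silo_level"], T["silo_level_min"], "<=") and \
           condition(V["conveyor_on"], True, "==")

V = {"silo_level": 12.0, "conveyor_on": True}
T = {"silo_level_min": 20.0}
print(rule_fill_level_low(V, T))  # rule is active -> True
```

As in Equation (6), each rule returns a Boolean output that signals whether the rule is active.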

2) Dempster-Shafer Evidence Theory:
Definition 1 (Dempster-Shafer [34]). Let Θ be a frame of discernment, in which each focal element represents a condition. A basic probability assignment (BPA) can be defined using a function m: 2^Θ → [0, 1], whenever:

m(∅) = 0,   Σ_{A ⊆ Θ} m(A) = 1.    (8)

Thus, considering a frame of discernment Θ = {A, B}, the power set 2^Θ is represented by:

2^Θ = {∅, {A}, {B}, {A, B}}.

The sum of the BPAs from (8) can be transformed into:

S_bpa = Σ_{j=1}^{n} m_j = 1,    (10)

where m_j is the j-th focal element of Θ, and j, n ∈ N.
The elements of Θ are considered mutually exclusive. For example, given Θ = {A, B}, a combination of focal elements is not possible:

m({A, B}) = 0.

A BPA describes the certainty of each focal element (e.g., a condition or a fault). Considering the weight of each focal element can help quantify the overall uncertainty. To this end, this paper presents a weighted version of S_bpa, denoted as S_wbpa, that describes the overall uncertainty of a BPA by using the weight of each focal element.
Proposition 1. The sum of BPAs from Equation (10) can be transformed into:

S_wbpa = Σ_{j=1}^{n} m_j · w_{m_j} + U = 1,    (12)

where w_{m_j} is the confidence weight of the BPA m_j, and U is the overall uncertainty. The confidence weight w_{m_j} represents the confidence level of the evidence m_j, which can be quantified using predefined criteria.
The overall uncertainty of the body of knowledge can be represented as:

U = 1 − Σ_{j=1}^{n} m_j · w_{m_j},    (13)

where the value of U increases as the confidence weights of the focal elements of Θ decrease. Thus, a large value of U corresponds to a high uncertainty in the body of knowledge.
In this sense, the overall uncertainty U represents the amount of unknown information or the lack of evidence. At least one of the focal elements of Θ is different from zero:

∃ j : m_j ≠ 0.

Definition 2. Each confidence weight w_{m_j} is bounded:

0 ≤ w_{m_j} ≤ 1.

Knowing the value of the overall uncertainty, we can assess the confidence in the available evidence. Therefore, Proposition 1 paves the way to an overall uncertainty measurement that considers the confidence weight of each piece of evidence. However, the integrity of Equation (12) (the sum of BPAs) must be preserved. For this reason, Proposition 1 must be shown consistent with Equation (10) by mathematical proof.

Lemma 1. Denote S_bpa and S_wbpa as the BPA sum and the weighted BPA sum with an explicit overall uncertainty definition, respectively. Then, it holds:

S_bpa = S_wbpa = 1.

Proof. Considering each weight w_{m_j} → 1, then U = 1 − Σ_{j=1}^{n} m_j, and if Equation (10) holds, U = 0. Hence, both sides equal one. Likewise, considering each weight w_{m_j} → 0, the term Σ_{j=1}^{n} m_j · w_{m_j} tends to zero, and U = 1; thus, both sides again equal one. The first scenario represents total certainty in the provided evidence, which results in U = 0, whereas the second scenario represents total uncertainty in the provided evidence, which results in U = 1. Any other case with w_{m_j} ∈ [0, 1] also yields one, because U = 1 − Σ_{j=1}^{n} m_j · w_{m_j} cancels the weighted sum by construction.
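Proposition 1 can be sketched numerically. This is a minimal illustration with invented BPA masses and confidence weights, not values from the paper:

```python
# Sketch of Proposition 1: S_wbpa = sum_j m_j * w_j + U = 1, with the
# overall uncertainty U = 1 - sum_j m_j * w_j (Equation (13)).
def weighted_bpa_sum(masses, weights):
    """Return the weighted evidence term and the overall uncertainty U."""
    evidence = sum(m * w for m, w in zip(masses, weights))
    uncertainty = 1.0 - evidence
    return evidence, uncertainty

masses = [0.6, 0.4]    # BPAs m_j summing to 1, as required by Equation (10)
weights = [0.9, 0.7]   # confidence weights w_mj, each bounded in [0, 1]
evidence, U = weighted_bpa_sum(masses, weights)
print(round(evidence + U, 10))  # S_wbpa always equals 1 (Lemma 1)
print(round(U, 2))              # overall uncertainty of the body of knowledge
```

Lowering the confidence weights increases U, matching the behavior described for Equation (13).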
B. Extraction of Tacit Knowledge and Uncertainty Quantification

1) Knowledge Extraction: The lean manufacturing tool failure mode and effect analysis (FMEA) is extended for use as a causal method to transfer tacit knowledge from the shop floor into an explicit form, which can be easily modeled as knowledge rules. The FMEA is built by an expert panel from the process and identifies the failure modes, possible causes, and recommendations for a given system. This research uses an extended FMEA to extract the knowledge into a digital format.
The knowledge tuple TU has the form:

TU_i = (P, SP, FM, C, E, RE, R, w_R),

where i ∈ N. Each knowledge tuple has only one associated rule R and only one rule weight w_R. Each failure mode FM is associated with one process P and one sub-process SP. Each FM can have several causes C, effects E, and recommendations RE. The knowledge rule R can be used for process optimization or troubleshooting purposes. This article proposes an improved version of the extended FMEA from [26], which consists of a spreadsheet with four templates: settings, weight update, system, and component. Moreover, this study formalizes the previous approach mathematically, allowing further improvements and modifications.
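The knowledge tuple can be sketched as a small data structure. The field names follow the tuple elements named above; the concrete values are invented for illustration:

```python
from dataclasses import dataclass

# Sketch of the knowledge tuple TU_i = (P, SP, FM, C, E, RE, R, w_R).
@dataclass
class KnowledgeTuple:
    process: str            # P
    sub_process: str        # SP
    failure_mode: str       # FM
    causes: list            # C  (several causes allowed)
    effects: list           # E
    recommendations: list   # RE
    rule: str               # R   (exactly one rule per tuple)
    rule_weight: float = 1.0  # w_R (exactly one weight per tuple)

tu = KnowledgeTuple(
    process="storage", sub_process="silo B",
    failure_mode="low fill level",
    causes=["supply conveyor stopped"], effects=["dosing delay"],
    recommendations=["restart vacuum conveyor"],
    rule="silo_level <= silo_level_min", rule_weight=0.8,
)
print(tu.failure_mode)
```

Parsing one extended-FMEA template row into one such tuple keeps the one-rule, one-weight constraint explicit in code.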
The extended FMEA provides a framework for establishing knowledge rules using templates. These rules are used to detect faults in the system, and they follow the criteria defined in the Controls-Diagnosis column of the traditional FMEA. The rules are written in a programming-friendly manner to make information parsing manageable. Each rule contains a formula for detecting a failure mode, which is a function of the process variables V and process thresholds T. This formula can be further detailed using sub-rules.
The previous procedure is illustrated graphically in Figure 3, which shows the relationships between the templates.
The template settings collects the thresholds and system set points in one place (see Table IV). This is implemented so that the variables can be changed easily, without the hassle of editing individual variables across different templates. This template includes sections for the team, system, and component. The team thresholds change variables of the template weight update, whereas the system and component thresholds affect the templates system and component, respectively. The template weight update contains the information to quantify the uncertainty of the knowledge rules as confidence weights. The confidence weights use the criteria specified by the expert panel.
2) Uncertainty Quantification of the Knowledge Rules: The uncertainty of the knowledge rules can be quantified using a confidence weight for each rule. This implies finding criteria that represent the uncertainty of the rule, so that the weight for the rule can be defined. The weight of a knowledge rule takes a value in the range [0, 1] and is used to assess the certainty of the rule:

w_R = f(w_{R_{C_1}}, w_{R_{C_2}}, ..., w_{R_{C_{N_R}}}),   with   w_{R_{C_i}} = f(V, T),

where V is the process variable, T is the process threshold, w_R is the rule weight, w_{R_{C_i}} is the i-th criterion for the rule weight, and N_R ∈ N. The criteria for the confidence weights of the knowledge rules are found in the template weight update. It is important to note that the expert panel can define the extent of the rule criteria, so a weight can be composed of one or several criteria. Each weight w_R can contain N_R sub-weights w_{R_{C_i}}. The expert panel defines the criteria that conform each of these w_{R_{C_i}}, which is a function of the variables V and thresholds T in the template. This research uses three main criteria to conform the weight of a rule: the weight of the expert panel w_{R_{C_1}} = w_P, the weight of the KPI compliance w_{R_{C_2}} = w_K, and the weight of the user rating w_{R_{C_3}} = w_U. Once the system is in operation, the confidence weights can be updated dynamically using the historical values. The accumulated value of the weight w_{R_a} can be calculated as:

w_{R_a} = (1/N_{R_A}) Σ_{i=1}^{N_{R_A}} w_R(i),

where w_R(i) is the i-th historical value of the rule weight. The weight of the expert panel w_P is defined using:

w_P = (1/N_{R_A}) Σ_{i=1}^{N_{R_A}} w_{M_i},

where N_R, N_{R_A} ∈ N, V_t represents the team variables, T_t the team thresholds from the template settings, and w_{M_i} represents the weight of the i-th member of the expert panel. The weight of each member w_{M_i} is defined using:

w_{M_i} = (1/N_M) Σ_{j=1}^{N_M} w_{M_{C_j}},

where N_M, j, i ∈ N, and w_{M_{C_j}} is the weight of the j-th criterion C_j used to evaluate the members of the expert panel.
The weight of the KPI compliance w_K is defined using:

w_K = (1/N_K) Σ_{i=1}^{N_K} w_{K_{C_i}},   with   w_{K_{C_i}} = K_{C_i} / K_{T_i},

where i, N_K ∈ N, w_{K_{C_i}} represents the confidence weight for the KPI, K_{C_i} represents the current KPI calculation, and K_{T_i} is the target or estimated KPI for the machine performance during the member's working time. The expert panel defines K_{C_i} and K_{T_i}, where K_{C_i} is calculated using online machine data and K_{T_i} is set by the team. The weight of the user rating w_U is defined using:

w_U = U_S,

where U_S is the user satisfaction in the range [0, 1].
The prior weights for the knowledge rules are composed solely of the expert panel weight; thus:

w_R = w_P.

Fig. 4 shows an overview diagram of the confidence weights. The weight for each rule takes a value in the range [0, 1]. Although the confidence weight can provide the certainty of the active rule, there is no assessment of the overall uncertainty, particularly for knowledge rules that are still under evaluation for acceptance. The Dempster-Shafer evidence theory (DSET) can support the modeling of the overall uncertainty.
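The composition of the rule weight from the panel, KPI, and user-rating criteria can be sketched as follows. The paper leaves the exact combination function to the expert panel, so a plain average is assumed here purely for illustration:

```python
# Sketch of the rule-weight composition w_R = f(w_P, w_K, w_U).
# The averaging of the available criteria is an assumption, not the
# paper's prescribed combination function.
def rule_weight(w_panel, w_kpi=None, w_user=None):
    """Prior weight uses only w_P; later updates can add w_K and w_U."""
    parts = [w for w in (w_panel, w_kpi, w_user) if w is not None]
    return sum(parts) / len(parts)

print(rule_weight(0.9))             # prior weight: w_R = w_P
print(rule_weight(0.9, 0.6, 0.75))  # dynamically updated weight in operation
```

Each argument stays in [0, 1], so the combined weight also stays in [0, 1], as required.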
The knowledge rule, when triggered, takes a value defined in {0, 1} (inactive or active). Having a triggered rule R_i:

m_{R_i} = R_i = 1,

where R_i is the i-th focal element of Θ, and i ∈ N.

Since the system only triggers one knowledge rule at a time, the weighted BPA sum S_wbpa is represented as:

S_wbpa = R_i · w_{R_i} + U = 1,

where for all rules R_j that are not triggered, m_{R_j} = 0. In this form, S_wbpa takes into consideration neither the other focal elements (i.e., the knowledge rules that are not active) nor their associated weights. Considering the weights of each focal element can improve the quantification of the overall uncertainty. To this end, we present an approximation of S_wbpa, called S_awbpa, that considers all the focal elements using a sensitivity-to-zero approach. Cheng et al. [35] proposed the use of sensitivity to zero when building evidence, which approximates the zero and one values to nearly zero and nearly one, respectively.
Remark 1. The approximation factor k enhances the evidence definition because all the focal elements are considered, even if their values are nearly zero [36]:

k = 1 − 1/F,    (28)

where k ∈ R and F ∈ N.

Proposition 2. Using the approximation factor k from (28), the weighted BPA sum from (12) is transformed into:

S_awbpa = Σ_{j=1}^{n} m̃_{R_j} · w_{R_j} + U = 1,

where m̃_{R_j} is the BPA using the approximation factor k, represented as:

m̃_{R_j} = k · R_j if R_j is triggered, and m̃_{R_j} = (1 − k)/(n − 1) otherwise.

Assumption 1. Consider a factor F such that F ≫ 1, and therefore k → 1.
Similar to Proposition 1, the integrity of Equation (12) must be preserved when applying Proposition 2. Therefore, Proposition 2 must be shown consistent with Equation (12) by mathematical proof.
Lemma 2. Denote S_bpa and S_awbpa as the BPA sum and the approximated weighted BPA sum with an explicit overall uncertainty definition, respectively. Then, it holds:

S_bpa = S_awbpa = 1.

Proof. Assuming a factor F ≫ 1, the approximation factor k → 1, and therefore the BPA of the active rule R_i tends to one, whereas the BPAs of the inactive rules tend to zero. As a result, R_i · w_{R_i} + U = R_i · w_{R_i} + 1 − R_i · w_{R_i}, which equals one, thus satisfying Equation (10).
The BPA m_{R_j} can be transformed into an array form for posterior calculations [37]:

m = [R_1 · w_{R_1}, R_2 · w_{R_2}, ..., R_n · w_{R_n}, U],

where R_j and w_{R_j} are the j-th element of Θ for the rule and its confidence weight, respectively, U is the overall uncertainty, and j, n ∈ N.
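The approximated weighted BPA sum and its array form can be sketched numerically. The form k = 1 − 1/F and the even split of the residual mass over the inactive rules follow the F ≫ 1 ⇒ k → 1 behavior stated above, but are assumptions of this sketch; the weights are invented:

```python
# Sketch of S_awbpa (Proposition 2): the active rule receives mass k -> 1,
# while the inactive rules share the nearly-zero remainder (sensitivity to
# zero). Returns the array form [m_R1, ..., m_Rn, U].
def approximated_bpa(active_idx, weights, F=1000):
    k = 1.0 - 1.0 / F                     # assumed form; F >> 1 gives k -> 1
    n = len(weights)
    masses = [k if j == active_idx else (1.0 - k) / (n - 1) for j in range(n)]
    evidence = sum(m * w for m, w in zip(masses, weights))
    U = 1.0 - evidence                    # overall uncertainty
    return masses + [U]

weights = [0.8, 0.9, 0.7]
bpa = approximated_bpa(active_idx=0, weights=weights)
total = sum(m * w for m, w in zip(bpa[:-1], weights)) + bpa[-1]
print(round(total, 10))  # S_awbpa always equals 1 (Lemma 2)
```

Note that all focal elements contribute, even the nearly-zero ones, which is the point of the sensitivity-to-zero approach.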

C. Knowledge Modeling and Validation
1) Knowledge Modeling: Having extracted the knowledge using the causal method FMEA, this knowledge can be turned into a knowledge model by formalizing the rules as primitive recursive functions. Kleene [38] defined that "a function ϕ is primitive recursive in ψ_1, ..., ψ_l (briefly Ψ), if there is a finite sequence ϕ_1, ..., ϕ_k of (occurrences of) functions (called a primitive recursive derivation of ϕ from Ψ) such that each function of the sequence is either one of the functions Ψ (the assumed functions), or an initial function, or an immediate dependent of preceding functions, and the last function ϕ_k is ϕ" [38]. Kleene defined the switch-case function as follows: "a set of predicates Q_1, ..., Q_m is mutually exclusive, if for each set of arguments not more than one of them is true. The function ϕ defined thus, where Q_1, ..., Q_m are mutually exclusive predicates (or ϕ(x_1, ..., x_n) shall have the value given by the first clause which applies), is primitive recursive in ϕ_1, ..., ϕ_{m+1}, Q_1, ..., Q_m." The knowledge rules from the extended FMEA are defined as functions of the process variables and process thresholds:

R_j = f(V_1, ..., V_n, T_1, ..., T_n),

where V_1, ..., V_n represent the process variables, T_1, ..., T_n are the variable thresholds used in the knowledge rules, and j, n ∈ N. The rules extracted by the extended FMEA are mutually exclusive. This mutual exclusivity satisfies the condition of Kleene's switch-case function. Thus, the knowledge rules can be represented with the function L_R (to simplify the equations, the term (V_1, ..., V_n, T_1, ..., T_n) is omitted):

L_R = L_{R_1} if R_1; L_{R_2} if R_2; ...; L_{R_m} if R_m; L_{R_{m+1}} otherwise,

where R_1, ..., R_m are the knowledge rules, L_{R_1}, ..., L_{R_m} represent the corresponding labels for each rule, L_{R_{m+1}} is the exit clause, and m ∈ N.
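The switch-case form of the knowledge model can be sketched directly in code. The rule predicates and labels below are invented for illustration; what matters is the structure: mutually exclusive rules, first true clause applies, explicit exit clause:

```python
# Sketch of Kleene's switch-case form L_R for the knowledge model: mutually
# exclusive rules R_1..R_m mapped to labels L_R1..L_Rm, plus an exit clause.
def knowledge_model(V, T):
    rules = [
        (lambda: V["silo_level"] <= T["silo_level_min"], "L_R1: low fill level"),
        (lambda: V["motor_temp"] >= T["motor_temp_max"], "L_R2: motor overheating"),
    ]
    for predicate, label in rules:     # the first true clause applies
        if predicate():
            return label
    return "L_R3: no active rule"      # exit clause L_R(m+1)

V = {"silo_level": 30.0, "motor_temp": 95.0}
T = {"silo_level_min": 20.0, "motor_temp_max": 80.0}
print(knowledge_model(V, T))  # -> L_R2: motor overheating
```

Because the rules are mutually exclusive, at most one predicate is true for any set of arguments, matching the precondition of Kleene's definition.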
The knowledge-based model can also be represented with transformed rules:

L_R = L^T_{R_1} if R_1; ...; L^T_{R_m} if R_m; L_{R_{m+1}} otherwise,

where the transformed rule L^T_{R_j} is defined using Equation (28): it takes the value k · R_j if R_j is triggered, and the nearly-zero approximated value otherwise. The next step is the integration of the knowledge-based model and the uncertainty of each rule to determine the confidence level of the rules. The previous section defined the uncertainty as a confidence weight for each rule using Equation (18). Having a triggered rule R_j with its corresponding confidence weight w_{R_j} provides a relevant assessment of the process; however, the overall uncertainty of the body of knowledge remains unknown. Knowing the uncertainty could provide a perspective on the overall confidence. Therefore, the overall uncertainty U of the knowledge-based model for the currently triggered rule R_i must be calculated. For this purpose, the rule R_i is transformed into a set of evidence m_{R_j} using Equation (11), where each term L^w_{R_j} represents the weighted label of the rule. Thus, the overall uncertainty U is obtained using Equation (13).

2) Knowledge Validation: Having a knowledge model containing explicit knowledge in the form of rules, the next step is to define a validation strategy to evaluate their performance. For this purpose, the knowledge rules are validated using the KPI calculation over a period of time. Thus, the validation of knowledge rule R_j is represented by:

K_{V_{R_j}} = (1/N_V) Σ_{i=1}^{N_V} w_{K_{C_i}},

where i, N_V ∈ N, w_{K_{C_i}} represents the confidence weight for the KPI, K_{C_i} represents the current KPI calculation, and K_{T_i} is the target or estimated KPI for the knowledge rule. The next step is to compare the validation result K_{V_{R_j}} against a threshold and, if successful, approve the rule. In this study, the rules are evaluated on a short-term and a long-term basis. The short-term basis evaluates the acceptance of a new knowledge rule, whereas the long-term basis evaluates its long-term effects.
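The KPI-based validation step can be sketched as follows. The KPI values, the ratio-to-target compliance measure, and the 0.8 acceptance threshold are assumptions of this sketch, not values prescribed by the paper:

```python
# Sketch of the KPI-based validation K_V of a knowledge rule R_j: the
# compliance of N_V KPIs against their targets is averaged and compared
# against an acceptance threshold (0.8 is an assumed value).
def validate_rule(kpi_current, kpi_target, threshold=0.8):
    ratios = [min(c / t, 1.0) for c, t in zip(kpi_current, kpi_target)]
    k_v = sum(ratios) / len(ratios)   # averaged KPI compliance for R_j
    return k_v, k_v >= threshold

# e.g., acceptance rate (current 0.92 vs. target 0.95) and operating
# time in hours (current 70 vs. target 80) over the evaluation period
k_v, accepted = validate_rule(kpi_current=[0.92, 70.0], kpi_target=[0.95, 80.0])
print(round(k_v, 3), accepted)
```

Running the same check over a short window supports rule acceptance, while a long window captures long-term effects, as described above.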

D. Interactive Assessment System
This subsection provides considerations from the software engineering side for the deployment of KLAFATE as a backend, as well as for the user interface using the augmented reality device HoloLens. Software development followed an agile methodology: issues were defined and grouped into working-package sprints with a two-week time slot, keeping a backlog for future tasks. The first challenge is to define the software requirements. For this purpose, the backend and HoloLens are first addressed separately, and second as a system. In the first step, the backend must fulfill the following tasks:
• Communication with the machine and the HoloLens
• Data collection from the machine
• Knowledge extraction through FMEA parsing
• Uncertainty quantification of the knowledge rules
• Sending assessment messages to the HoloLens
• Receiving user ratings and reports
• Updating rule weights
• Calculating the response time of the system and communications
The HoloLens must fulfill the following tasks:
• Display the assessment provided by the backend
• Request a report from the user in case no effective diagnosis is available
• Request a user rating
• Provide voice commands to enhance the user experience
Having defined the tasks for the backend and HoloLens, it is possible to sketch a sequence diagram that shows their interactions. As shown in Figure 5, the major actors are the machine, the backend, and the HoloLens. The backend contains the main modules for communication using OPC-UA and MQTT for reading and writing, an MQTT broker, a parsing module to extract information from the FMEA and build the knowledge model, and the main function. The next step is to define the flow diagrams and pseudo-code to identify the modules and functions. We used Git, a version control system, to work collaboratively and keep track of software changes. Finally, we addressed the hardware: a laptop served as the device for software development and testing, and the HoloLens as the user interface. Having tested the functionality of the backend, it is possible to host it on different hardware setups, such as a cloud platform, a local server, or even an edge device (e.g., an industrial PC on the shop floor).
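The assessment message the backend sends to the HoloLens can be sketched as a small JSON payload. The topic name and payload schema below are assumptions for illustration, not the paper's actual message format:

```python
import json

# Sketch of an assessment message from the backend to the HoloLens.
# Field names and values are illustrative assumptions.
def build_assessment(rule_label, recommendations, confidence, uncertainty):
    return json.dumps({
        "rule": rule_label,                  # label of the triggered rule
        "recommendations": recommendations,  # RE entries from the FMEA
        "confidence": confidence,            # w_R of the triggered rule
        "uncertainty": uncertainty,          # overall U of the knowledge body
    })

msg = build_assessment("low fill level", ["restart vacuum conveyor"], 0.8, 0.2)
print(msg)
# A real deployment would publish this over MQTT, e.g. with paho-mqtt:
# client.publish("klafate/assessment", msg)
```

Keeping the confidence weight and the overall uncertainty in the payload lets the UI show the operator not only what to do, but how certain the system is.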

IV. USE CASE: APPLICATION OF THE KLAFATE IN A LABORATORY BULK GOOD SYSTEM
This section describes the practical implementation of KLAFATE and the test results obtained on a laboratory testbed, a small-scale bulk good system (BGS). The section is divided into the following subsections: BGS description, implementation of KLAFATE, results, and discussion. An overview of this use case is shown in Figure 6. The backend hosts KLAFATE and provides the OPC-UA and MQTT communication interfaces, which are used to communicate with the BGS and the augmented reality device HoloLens, respectively.
The backend was developed in Python 3.8 using the Anaconda IDE and was tested on a laptop with an Intel Core i7, 32 GB RAM, and a 475 GB HDD, running Windows 10 64-bit. The interactive user assessment system (see Figure 6) consists of the backend and an augmented reality (AR) device as the user interface. We chose Microsoft's HoloLens 2 as the AR device for its user experience features (e.g., holographic support, voice commands, head/eye/hand tracking, and support for customized MQTT communication). The HoloLens specifications include a Qualcomm Snapdragon 850 processor with 64 GB of storage and 4 GB of DRAM, running the Windows Holographic operating system. The AR application was developed in Unity 3D, a cross-platform game engine developed by Unity Technologies that uses the C# programming language.

A. Bulk Good System Laboratory Plant
The BGS is a discrete process comprising four stations: loading, storage, weighing, and filling. The BGS uses plastic pellets as the bulk good and possesses the common components of a large-scale industrial BGS (e.g., conveyors (motor, vacuum), silos, valves, weighing and dosing stations, and an automation platform). Figure 7 shows the stations of the BGS.

B. Experiment Design
The experiment illustrates the application of KLAFATE in a small-scale industrial testbed. Pursuing the experiments requires a setup procedure for the BGS, KLAFATE, and the expert panel.
The expert panel setup included two experts, each with at least one year of experience working with the bulk good system, and one apprentice with no experience on the machine. The panel discusses and proposes a troubleshooting program and new recipes to optimize the process. These recipes are collections of machine parameters that allow the machine to achieve the best KPIs. The years of experience of the experienced worker were exaggerated for illustration purposes in the experiments (see Table VII). The BGS setup consists of the initial conditions for the stations, such as the machine parameters, product weight, and compressed air pressure. In addition, before every experiment (e.g., testing a process recipe), the silos were filled to 90% of their capacity.
The KLAFATE setup defines the constants in the settings template. These constants are the thresholds for the rules and the confidence weights. Thresholds are grouped into team, system, and component levels; some of them are listed in Tables VIII, IX, and X, respectively. Depending on the experiment, the calculation time for KPI Compliance was set to 10, 20, or 30 minutes.
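A minimal sketch of how such a settings template might be held in the backend, assuming a nested dictionary grouped like Tables VIII–X; the keys and values here are illustrative placeholders drawn from the experiment description, not the paper's actual thresholds.

```python
# Hypothetical settings template; group names mirror Tables VIII-X,
# values are placeholders taken from the experiment description.
SETTINGS = {
    "team":      {"min_years_experience": 1},
    "system":    {"kpi_window_min": (10, 20, 30)},   # KPI Compliance windows
    "component": {"silo_fill_level_pct": 90, "suction_time_s": 3},
}

def threshold(group, name):
    """Look up a rule threshold or confidence-weight constant by group."""
    return SETTINGS[group][name]
```

Centralizing the constants this way lets the offline and online scripts share one source of truth for rule thresholds.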

C. Implementation of the Proposed Methodology
The KLAFATE methodology was applied in two stages: offline and online. The offline stage is supported by the data collection script, and the online stage by the main script. Around these two scripts, several auxiliary scripts provide services such as OPC-UA and MQTT communication, fusion functions, and parsing functions for the FMEA templates. In addition, an MQTT broker enables communication between the backend and the HoloLens. The offline stage uses the backend's data collection module, with the console as the user interface. The collected data are used for the uncertainty quantification of a failure mode and to validate process knowledge (e.g., process recipes). The online stage, the interactive assessment system, consists of the backend and the augmented reality application on the HoloLens. To operate KLAFATE, the expert panel needs to complete the following steps: filling in the FMEA templates, initializing the BGS, and running the backend and the HoloLens application. The backend contains the main script and the MQTT broker that connects the main script to the HoloLens. The pseudo-code of the main script is displayed in Algorithm 1.
The HoloLens application provides interactive assessment information regarding triggered failure modes. It offers several (internal) services, such as voice commands, hand gesture recognition, and custom-programmed services (e.g., an MQTT client, state machines, and handshake communication with the backend). The system latency was also calculated using the backend.
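The main script's loop (Algorithm 1) can be paraphrased as the sketch below. The communication and rule-evaluation steps are injected as plain callables so the sketch stays independent of any OPC-UA or MQTT library; all names are assumptions, not the paper's actual function names.

```python
def main_cycle(read_machine, evaluate_rules, to_evidence, publish):
    """One backend cycle: read data, evaluate the FMEA-derived rules, and
    publish any triggered failure mode with its evidence and uncertainty."""
    data = read_machine()              # process variables (via OPC-UA in the paper)
    active = evaluate_rules(data)      # knowledge-based model parsed from the FMEA
    if not active:
        return "normal"
    for fm in active:
        evidence, u = to_evidence(fm)  # Dempster-Shafer transformation
        publish({"fm": fm, "evidence": evidence, "uncertainty": u})
    return "fault"

# Stub wiring for demonstration:
sent = []
status = main_cycle(
    read_machine=lambda: {"quality": 0.7},
    evaluate_rules=lambda d: ["low quality status"] if d["quality"] < 0.9 else [],
    to_evidence=lambda fm: ([0.83, 0.003, 0.003], 0.16),
    publish=sent.append)
```

Dependency injection here mirrors the paper's modular split (communication, parsing, fusion, main function) and makes the loop testable without hardware.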

D. Results
This section demonstrates the functionality of KLAFATE: a data collection script for data analysis (e.g., validation of new recipes and uncertainty quantification), data storage of the system time response, and an interactive assessment system based on the HoloLens and the backend.
1) Example using a Failure Mode at the System Level: This section provides a worked example of KLAFATE. For this purpose, we chose a failure mode (FM) at the system level, specifically low quality status. Table XI lists the extended FMEA at the system level for this FM, whereas Table XII lists the extended FMEA at the component level.
The rule low quality status is built using the logic presented in Equations (1)-(6). The active rule low quality status at the system level is defined using Equation (1) (see Table XI for the system FMEA), where C_1 and C_2 are defined in Table XI. The causes and recommendations for this active rule are provided by the active FM at the component level (see Table XII), whose conditions C_1-C_4 are defined in Table XII. The weight of the rule low quality status is defined using Equations (18) and (20)-(22). Likewise, the weights of the remaining rules can be modeled using the same procedure. Figure 8 shows the overview diagram of the confidence weights for the use case.
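The boolean structure of Equation (1) for the component-level FM no vacuum pump, R_i = (C1 and C2 and C3) or (C1 and C2 and (not C3) and C4), can be checked directly with a short function; the function name is an assumption for illustration.

```python
def rule_no_vacuum_pump(c1, c2, c3, c4):
    """Component-level rule from Eq. (1):
    R_i = (C1 and C2 and C3) or (C1 and C2 and (not C3) and C4)."""
    return (c1 and c2 and c3) or (c1 and c2 and (not c3) and c4)

# The rule fires through either conjunction:
assert rule_no_vacuum_pump(True, True, True, False)    # first branch
assert rule_no_vacuum_pump(True, True, False, True)    # second branch
assert not rule_no_vacuum_pump(True, True, False, False)
```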
Table VII shows the team setup, where three operators op_1, op_2, and op_3 constitute the expert panel (N_P = 3). The panel weight w_P is calculated using Equations (20) and (21). Each operator weight w_{M_i} is calculated from the criteria w_{MC_j}: years of general experience E_G, years of experience on the machine E_M, and individual performance K_A computed from the KPIs waste w and production rate p (see Table VII). Each operator weight w_{M_i} is defined using Equation (21), and the formulas for w_{E_G}, w_{E_M}, and w_{K_A} are defined in Table XIII. Thus, the weight for the years of general experience w_{E_G} is represented as a function of the conditions C_1, C_2, and C_3 described in Table XIII.
The weight for the years of experience on the machine w_{E_M} is represented analogously, with C_1, C_2, and C_3 as described in Table XIII.
Finally, the weight for the KPI performance w_{K_A} is represented analogously. Thus, the panel weight w_P can be calculated as the average of the operator weights:

w_P = (w_{op1} + w_{op2} + w_{op3}) / 3 = (0.88 + 0.75 + 0.5) / 3 ≈ 0.71

Using the machine availability K_{ma} as the KPI (N_K = 1) and assuming that the assessment solved the problem, K_{ma} = 1; if the problem has no appropriate diagnosis or cannot be solved, K_{ma} = 0. Assuming a satisfied operator (U_S = 0.8), the user rating weight can be calculated using Equation (23), and the weight for the rule R_{LQ} follows. Assuming that the system FMEA has three FMs, low quality status (LQ), high quality low production status (LP), and high quality normal production status (NP), the knowledge-based model can be represented using Equation (36). The weights of the rules w_{R_{LP}} and w_{R_{NP}} are assumed to be the prior weights from Equation (24). The knowledge-based model triggers rule R_{LQ}, which can be transformed into L_{w_R} using Equation (37), where L_{T_{R_{LQ}}} is defined using Equation (14). Assuming F = 2, the approximation factor k is calculated using Equation (12); since R_{LQ} is active, L_{T_{R_{LQ}}} takes its maximum value, and the remaining rules are assigned the complementary value. The uncertainty of the system is calculated using evidence theory, where the frame of discernment is represented using Equation (9):

Θ = {LQ, LP, NP}

The rule vector L_{w_R} can be transformed into a set of evidence m_{R_{LQ}} using Equation (11), where the overall uncertainty U is represented using Equation (14):

U = 1 − (0.99 · 0.84 + 0.005 · 0.71 + 0.005 · 0.71) ≈ 0.16

Thus, the set of evidence is calculated as:

m = [0.99 · 0.84, 0.005 · 0.71, 0.005 · 0.71, 0.16] = [0.83, 0.003, 0.003, 0.16]

Thus, the active rule R_{LQ} has a confidence level of 83%, and the overall uncertainty lies at 16%.
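The numeric steps of this worked example can be reproduced with a short script. The averaging form of w_P and the confidence-times-weight evidence transform follow the example above; the function names themselves are assumptions.

```python
def panel_weight(operator_weights):
    """Panel weight w_P as the mean of the operator weights (Eqs. 20-21)."""
    return sum(operator_weights) / len(operator_weights)

def evidence_set(confidences, weights):
    """Scale each rule confidence by its weight (Eq. 11); the residual mass
    is the overall uncertainty U (Eq. 14). Order follows Theta = {LQ, LP, NP}."""
    masses = [c * w for c, w in zip(confidences, weights)]
    return masses + [1.0 - sum(masses)]

w_p = panel_weight([0.88, 0.75, 0.5])                  # ~0.71
m = evidence_set([0.99, 0.005, 0.005], [0.84, w_p, w_p])
# m ~ [0.832, 0.0036, 0.0036, 0.161]: 83% confidence in LQ, ~16% uncertainty
```

Note that the evidence masses plus the uncertainty always sum to one, as Dempster-Shafer theory requires.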
2) Uncertainty Representation: The weight w_R represents the uncertainty of the rule R (e.g., a process recipe). The rule was evaluated at intervals of 10, 20, and 30 minutes. Assuming a steady process, synthetic data were created to illustrate the change in the weight over time. For this purpose, the panel weight w_P was assumed to change twice a year through a regular evaluation (e.g., the operators received training). The production rate was assumed to be steady with an average value of 3.5 prod/min. However, external disturbances (e.g., material shortages or pressure decay) were considered during April and September, as shown in Figure 9. These fluctuations in the production rate also influenced the user rating weight w_U, which in this case reflects user satisfaction that was not fulfilled (e.g., the estimated KPI was not reached). The weight w_R follows the fluctuations in the production rate, as shown in Figure 9. In contrast, the accumulated weight w_{Ra} shows a steadier trend that absorbs the disturbances.
3) Knowledge Validation: This scenario illustrates machine operation by an inexperienced operator using KLAFATE. The KPI under observation is the production rate PR, measured in prod/min. The experiment was conducted using different time slots, 10, 20, and 30 minutes, for the KPI calculations. The data collection script evaluates the performance of the machine by comparing the current production rate with the user's estimation. The experiment began with the machine in a steady, normal condition. This machine condition corresponds to the label NP (normal production), with an average production rate of 3.4 prod/min for the 30-minute time slot, as shown in Figure 10a. An inexperienced operator set a new recipe X1 on the machine with an estimated production rate of 4 prod/min, corresponding to an improvement of 18%. This recipe yielded a production rate of 2.9 prod/min. The recipe reached neither the estimation nor the current normal production rate NP;
therefore, the recipe was discarded and the operator loaded the previous recipe NP. The expert panel suggested a new recipe X2 with an estimated production rate of 4.2 prod/min, corresponding to an improvement of 23%. As the plot in Figure 10a shows, the new recipe fulfills the estimation for the 10-minute time slot with a moving average of five samples. However, when evaluating the 20- and 30-minute time slots (Figures 10b and 10c, respectively), the production rate decays. The reason for this decay lies in the silo levels, which cannot be replenished with the selected suction time of 3 s; this effect can only be observed over a longer evaluation period. Thus, for the 30-minute time slot, the production rate did not fulfill the estimation, although the new recipe still yields a better production rate than the current one. Nevertheless, approving a new recipe requires a further analysis over a longer time slot.
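The acceptance check described above, comparing a moving average of the production rate against the operator's estimate, can be sketched as follows. This is a simplified stand-in for the data collection script's analysis; the names and the window default are assumptions.

```python
def recipe_meets_estimate(rates, estimate, window=5):
    """True if the moving average over the last `window` production-rate
    samples (prod/min) reaches the estimated rate."""
    if len(rates) < window:
        return False               # not enough samples to judge the recipe
    return sum(rates[-window:]) / window >= estimate

# Recipe X1: estimated 4.0 prod/min, observed around 2.9 -> rejected
assert not recipe_meets_estimate([2.8, 2.9, 3.0, 2.9, 2.9], 4.0)
```

As the experiments show, a recipe that passes on a short window may still fail over 20 or 30 minutes, so the same check should be repeated on longer windows before adoption.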
The second experiment, shown in Figure 11, begins with a low production (LP) condition on the machine, corresponding to an average production rate of 3.2 prod/min. Similarly, an inexperienced operator set a new recipe X3 to improve the production rate; however, this also caused the production rate to dip, as in the previous experiment. In contrast to that experiment, the recipe was changed back to NP after the failure of recipe X3 was detected, which increased the production rate by 6%. The expert panel then suggested a new recipe X4, with an estimated production rate of 4 prod/min. As in the previous experiment, the initial estimate was achieved; however, as the evaluation progressed towards 30 minutes, the production rate started to decay, for the same cause discussed in the first experiment. As a result, the expert panel suggested another recipe X5, which stopped the decay and slightly increased the production rate, by 6% compared to X3.
A one-way ANOVA test was conducted to determine whether there were significant differences between the production rates of the recipes. The null hypothesis stated that all setpoints yielded the same production rates; the alternative hypothesis stated that the production rates differ. The test was applied to the three recipes. Given an alpha of 0.05, the resulting p-value was 3.12 × 10^-23, indicating that the null hypothesis should be rejected. Therefore, it can be concluded that the three recipes did not yield the same production rates and differed significantly.
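For reference, the F statistic behind such a one-way ANOVA can be computed from first principles; the p-value then follows from the F distribution with k−1 and n−k degrees of freedom (this sketch stops at the statistic, and the sample values below are illustrative, not the experiment's data).

```python
def f_statistic(*groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Clearly separated production-rate samples give a large F (null rejected):
f = f_statistic([2.8, 2.9, 3.0], [3.4, 3.5, 3.4], [4.1, 4.0, 4.2])
```

A tiny within-group variance relative to the between-group spread, as in the recipe data, is exactly what drives the reported p-value so far below alpha.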
4) Time Response of the System: The time response of the system was evaluated from different perspectives: the communication between the backend and the HoloLens, the internal backend cycle, and the time required from failure mode detection to user assessment. The latency of the MQTT communication averaged 1 s, with the last trials averaging less than 1 s; this is the time required to send the assessment message from the backend until its reception on the HoloLens. The time from failure mode detection until visualization on the HoloLens averaged 5 s. The internal cycle time of the backend averaged 10 s. This cycle time also depends on the user interaction, which means the user can influence this measurement (e.g., an inexperienced user requires additional time to evaluate the recommendation).
5) Interactive Assessment System using the Backend and the HoloLens: The backend collects data in the background and evaluates the knowledge rules. The HoloLens runs an internal loop and remains on standby until it receives a message from the backend. The scenario started with an inexperienced operator wearing the HoloLens. The BGS operated in a normal condition, and thus the HoloLens displayed no fault; see Figure 12a.
Closing the compressed air pressure valve triggers a low quality system failure mode at the backend. The backend sends an assessment message to the HoloLens containing information about the failure mode (e.g., description, effect, causes, and recommendations), as well as the weight of the failure mode and the overall uncertainty of the system. This uncertainty was calculated by transforming the active failure mode into a set of evidence. The HoloLens displays the assessment message, as shown in Figure 12b. The causes and recommendations of the system failure mode are its associated failure modes at the component level; thus, a system failure mode can trigger more than one component failure mode. The operator uses voice commands to select the next cause/recommendation pair, saying "next" (e.g., when several cause/recommendation pairs exist) or "solved" (when the failure mode has been addressed). After the failure mode has been marked as solved, the HoloLens displays a summary of the current failure mode and requests a user rating of satisfaction from one to five stars. If no diagnosis remains available, meaning the HoloLens has reached the last cause/recommendation pair, it requests an error report from the user, as shown in Figure 12d. The backend receives a message with either the solved or the report status and assigns a KPI compliance of 1.0 or 0.0, respectively. The backend then updates the weight of the failure mode w_R using the expert panel weight w_P, the KPI compliance weight w_K, and the user rating weight w_U.
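This closing update step can be sketched as below. The paper combines the expert panel weight, the KPI compliance, and the user rating via its Equation (18); the equal-weight average used here is an assumption standing in for that formula, as are the function and parameter names.

```python
def update_rule_weight(w_panel, solved, user_stars):
    """Update the failure-mode weight w_R after an interaction.
    solved=True maps to KPI compliance w_K = 1.0, otherwise 0.0; the
    one-to-five star rating is normalized to the user weight w_U."""
    w_k = 1.0 if solved else 0.0
    w_u = user_stars / 5.0
    # Assumed equal-weight combination (stand-in for the paper's Eq. 18):
    return (w_panel + w_k + w_u) / 3.0
```

A "solved" outcome with a high rating pushes w_R up, while a "report" outcome pulls it down, which is the feedback behavior the interactive loop relies on.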

E. Discussion
KLAFATE presents a way to formalize tacit knowledge and integrate it into an interactive assessment system. Remarkable features are the uncertainty representation of knowledge, the validation of knowledge rules, and the implementation of the framework on a small-scale industrial testbed. The data collection module allowed us to quantify knowledge uncertainty and validate new process recipes. The limitations of this approach include multi-fault scenarios at the system level: currently, only mutually exclusive faults are explicitly addressed. Addressing a scenario with simultaneous faults requires special treatment based on evidence theory, in which combinations of faults are considered; consequently, the number of focal elements of the evidence grows to 2^Faults. The knowledge model does not consider the historical nature of a fault, which means it cannot handle time-series data. This scenario could be addressed using a hybrid system composed of the current knowledge model and a machine learning model trained on time series. Uncertainty quantification is based on the criteria given by the expert panel weights, KPI analysis, and user ratings; this uncertainty assigns a confidence level to the triggered operational rule of the knowledge model. Knowledge validation was performed using KPI compliance, specifically the production rate. A typical industrial process includes several KPIs for validating process recipes (e.g., delay, machine availability, quality, and energy consumption). The KLAFATE methodology and its implementation open a discussion on the importance of user-centered approaches, especially for knowledge transfer and knowledge applicability on the shop floor.
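To illustrate the growth mentioned in the multi-fault limitation: a simultaneous-fault treatment must assign evidence mass over the power set of the fault frame, whose non-empty subsets (the candidate focal elements) number 2^N − 1. A minimal sketch:

```python
from itertools import combinations

def focal_element_candidates(faults):
    """All non-empty subsets of the fault frame, i.e. the candidate focal
    elements of a Dempster-Shafer multi-fault model (2**N - 1 of them)."""
    return [frozenset(c) for r in range(1, len(faults) + 1)
            for c in combinations(faults, r)]

subsets = focal_element_candidates(["LQ", "LP", "NP"])
assert len(subsets) == 2 ** 3 - 1   # 7 candidates for three faults
```

This exponential growth is why the single-fault assumption keeps the current model tractable.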

V. CONCLUSIONS
This research demonstrates how an interactive knowledge transfer framework can support the task of transforming tacit knowledge into explicit knowledge. The knowledge-based model was the outcome of this transformation and was integrated into an interactive assistance system that supports the operator on the shop floor. In addition, DSET quantified the uncertainty of the acquired knowledge, which was visually reflected in the results. The knowledge transfer framework provided a clear methodology for integrating uncertainty with the rules generated for the knowledge-based model. The findings of this research should stimulate the discussion of how to transfer knowledge from the shop floor into a more institutionalized form, specifically as a knowledge-based model embedded in an interactive assistance system. Furthermore, this novel methodology extracts expert domain knowledge and can be widely applied in other disciplines that rely on expert feedback. The use of DSET presents a new method for quantifying the uncertainty of expert knowledge. The integration of DSET with the knowledge-based model provides more reliable support to the operator, as it delivers the assessment with a degree of certainty, meaning that the operator can still use her own expertise to make the final decision. The uncertainty plot helps the decision-making process when validating a new body of knowledge, specifically when adopting a new recipe or set of setpoints. The validation plot portrays KPI behavior while a new body of knowledge, specifically a new recipe, is in use (e.g., bad recipes yielded low KPIs, which led to discarding the recipe, whereas high KPIs encouraged its adoption). The KLAFATE application presents an early adoption of the knowledge framework in an industrial setup, and the demonstration provided a detailed sequence of the steps to be followed, as well as the results obtained after each step.
Although the present study provides a holistic approach to managing the knowledge chain through an interactive knowledge transfer framework, new questions arose during this research. These questions concern the human nature of the information, specifically intrinsic bias introduced while extracting knowledge. This bias can play a significant role in selecting the knowledge to be included in the model and in selecting the criteria used to quantify uncertainty. These limitations must be addressed before the knowledge framework can be adopted in a fully automatic scenario. To this end, further research could explore new knowledge extraction strategies and methodologies for quantifying uncertainty. Finally, knowledge internalization is a prospective line of research that should address internalization both institutionally and at the operator level.

Figure 4: Overview of Confidence Weights

Figure 5: Sequence Diagram of the Interactive Assessment System

Figure 7: BGS. From left to right: loading, storage, weighing, and filling

Algorithm 1: Main script of the backend (pseudo-code)
Thus, the active FM no vacuum pump at the component level can be defined using Equation (1):
R_i = (P_j O_j P_{j+1}) = P_j O_j (P_k O_k P_{k+1} O_{k+1} P_{k+2} O_{k+2} P_{k+3})
where P_j = (C_1 and C_2 and C_3), O_j = or, and P_{j+1} = P_k O_k P_{k+1} O_{k+1} P_{k+2} O_{k+2} P_{k+3}, with P_k = C_1, O_k = and, P_{k+1} = C_2, O_{k+1} = and, P_{k+2} = not C_3, O_{k+2} = and, P_{k+3} = C_4. Thus, R_i can be represented as:
R_i = (C_1 and C_2 and C_3) or (C_1 and C_2 and (not C_3) and C_4)

Figure 8: Overview of Confidence Weights

Figure 9: Production rate against uncertainty representation for the rule weight

Figure 10: Recipe Validation Experiment 1
Figure 11: Recipe Validation Experiment 2

Figure 12: Interactive User Assessment using Augmented Reality. (a) HoloLens under normal conditions (b) Fault detection (c) User rating (d) No diagnosis and report

Table I: List of Symbols and Abbreviations
Table II: FMEA Template
Table III: Extended FMEA Template
Table IV: Settings Template
Table V: BGS Operation Parameters of the stations Loading, Storage, Weighing, and Filling (summarizes the list of setpoints for each station)
Table VI: BGS Variables of the stations Loading, Storage, Weighing, and Filling for data collection
Table VII: Expert Panel from the Template Profile
Table VIII: Team Thresholds from the Template Settings
Table IX: System Thresholds from the Template Settings
Table X: Component Thresholds from the Template Settings
Table XI: Extended FMEA at the system level
Table XII: Extended FMEA at the component level
Table XIII: Update of Rule Weights
Table XIV: Dynamic Confidence Weights for the Expert Panel