A Mathematical Approach to Design Verification Strategies That Incorporate Corrective Activities as Dedicated Decisions

System verification activities (VAs) are used to identify potential errors, and corrective activities (CAs) are used to eliminate those errors. However, existing mathematical methods for planning verification strategies do not consider decisions to implement VAs and to perform CAs jointly, ignoring their close interrelationship. In this article, we present a joint verification-correction model to find optimal joint verification-correction strategies (JVCSs). The model is constructed so that both VAs and CAs can be chosen as dedicated decisions with their own activity spaces. We adopt the belief model of Bayesian networks to represent the impact of VAs and CAs on verification planning and use three value factors to measure the performance of JVCSs. Moreover, we propose an order-based backward induction approach that solves for the optimal JVCS by updating the values of all verification states. A case study shows that our model can be applied to solve the verification planning problem effectively.


I. INTRODUCTION
System verification is defined as a process that evaluates whether a system or its components fulfill their requirements [1]. System verification is often planned and implemented as a set of verification activities (VAs), which can be executed at different developmental phases and on different system configurations [2]. A verification strategy is often designed to fulfill three objectives: "maximizing confidence on verification coverage, which facilitates convincing a customer that contractual obligations have been met; minimizing risk of undetected problems, which is important for a manufacturer's reputation and to ensure customer satisfaction once the system is operational; and minimizing invested effort, which is related to manufacturer's profit" [3].
In current practice, a verification strategy consists of VAs, each of which is planned for a system configuration at a specific developmental phase, and corrective activities (CAs), which are identified reactively. A verification strategy can be presented as an acyclic, tree-shaped graph where each node represents a VA [4]. Strategy planning methods, including the decomposition approach [5], the set-based design method [6], the parallel tempering method [4], and the reinforcement learning method [7], have been proposed to design verification strategies. All of these methods consider only VAs as dedicated decisions, whereas CAs are simplified as default actions or even ignored during verification planning. This simplification may undermine the value of the resulting verification strategies because of the potential suboptimality of the CAs. In our previous work [8], a hybrid verification and correction framework was proposed to integrate CAs into verification strategies as dedicated decisions. The framework enables verification planning to be extended to include both VAs and CAs as a result of independent decisions. However, the interactive influence that one type of activity has on the other over a verification process (i.e., the execution of the verification strategy) was left for future work. Such an aspect is central to this article.
Incorporating CAs as dedicated decisions brings two challenges. First, the interactive influence between VAs and CAs makes it necessary to extend the paradigm of verification planning that defines the procedure of verification processes. That is, a standardized decision-making model for various types of activities is lacking in the context of system verification. The extension of the paradigm requires more detailed analysis of assumptions and constraints of verification processes. Second, because VAs and CAs are executed to achieve different goals, each of them has its own activity space and activity results. This difference causes the set of feasible activities to change along the verification process, making verification planning dynamic. This dynamism increases the complexity of verification planning because the activity space cannot be predetermined at the beginning of planning.
This article presents a joint verification-correction model (JVCM) to solve the two challenges. Verification planning is modeled as a sequential decision-making process where the activity space of each decision depends on the sequence of all prior executed activities. We propose an order-based backward induction (BI) method to solve for exact optimal solutions of such planning problems.
The rest of this article is organized as follows. Section II reviews the relevant work related to mathematical approaches for verification planning and sequential decision-making using Bayesian networks (BNs). Section III describes the proposed JVCM, which includes the extended paradigm of verification planning, belief models of VAs and CAs, performance measurement rules, and the proposed order-based BI method. The use of the JVCM is illustrated in a case study in Section IV. Finally, Section V concludes this article.

II. RELATED WORK

A. MATHEMATICAL APPROACHES FOR VERIFICATION PLANNING
System verification consumes a large proportion of development effort, as much as 40%-50% of the total [9]. Verification planning has traditionally relied on qualitative assessments performed by subject matter experts and on industry standards derived from the collective experience of multiple experts [3]. Given the likely suboptimality of such approaches, methods underpinned by mathematical models have been proposed to support verification planning. We distinguish three categories of mathematical methods in the literature. The first category draws on the experience of practitioners and project experts to propose verification strategies based on a conceptual model of verification planning [10], [11], [12]. The second category includes studies in economics and management science that seek to manage the time and cost of testing strategies [13], [14]. The third category belongs to the field of systems engineering, where mathematics and optimization are applied to verification planning [3], [5], [15], [16]. We review the third category because it is directly related to the life cycle of system development, which is the scope of this article.
Fundamental work in the third category was done by Whiteside and Reimann [17], who proposed a methodology for quantitatively assessing the effects of alternate verification, validation, and testing business practices on system life cycle costs. This model was extended by Engel and Barad [15] into a decision problem, and a decomposition method was proposed to find the optimal decision strategy [5]. However, these models present three significant weaknesses. First, the dependence between systems and activities is not captured, even though it influences the selection of activities. A BN-based model was proposed in response to this gap to capture the dependencies between VAs [18]. Second, there is often a lack of knowledge about a system at the beginning of its development [19]. As a result, designing a fixed or static verification strategy early in system development is likely suboptimal [6]. One solution to this problem is to quantify the impact of a designer's domain knowledge and problem framing on information collection with a descriptive model [20]. Chaudhari et al. [21] compared various descriptive models to find the best description of information acquisition decisions when multiple information sources are present and the total budget is limited. Another solution is to use set-based design to collect information progressively during the process. For example, a set-based design method was proposed to derive dynamic verification strategies that can adapt to the progression of system development and verification [4], [6]. Third, previous verification planning models were built for single-firm cases, while large system development projects involve multiple firms and their coordination [22]. Thus, Kulkarni et al. [23] used game theory and incentive theory to model multiple-firm scenarios and investigated the alignment of verification strategies between multiple firms.
While system verification relies on CAs to eliminate errors, the methods discussed in the previous paragraphs simplify CAs as properties of VAs instead of treating them as dedicated decisions during verification planning. For example, the decomposition approach [5] models CAs as partial costs embedded in every VA. Xu et al. [6], [7] considered CAs to be direct actions triggered by the confidence threshold of a system parameter. Kulkarni et al. [24] assumed that CAs always happen if the system is not in its ideal state. In response to this gap, the BN model of verification proposed in [18] was extended to model CAs [8]. This article leverages such an extended model to integrate VAs and CAs as dedicated decisions in a verification planning problem.

B. SEQUENTIAL DECISION-MAKING USING BNS
A BN is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph [25]. Due to their model transparency [26], BNs have been widely adopted for statistical representation and Bayesian inference. While BN inference depends on the evidence collected at network nodes, deciding how to observe network nodes and collect evidence is a sequential decision-making problem [27]. It is also known as a binary identification problem if each observation has only two possible results [28]. This problem has been explored in a variety of research domains, such as disease diagnosis [29], fault diagnosis [30], [31], [32], [33], and troubleshooting [34]. Compared with these domains, verification planning in this article is distinguished by the inclusion of CAs, whose information is not included in the prior knowledge of the network nodes.
Because each action depends only on the current results of all network nodes, these sequential decision-making problems can be viewed as dynamic programming (DP) problems. However, the total number of solution strategies grows exponentially with the network size, which makes such problems very difficult to solve [27]. Thus, the scope of this article is narrowed down to those BNs whose exact solution strategies can be found; approximate solution strategies for complex BNs are reserved for future studies.
To date, three major DP approaches have been proposed to solve for exact solution strategies. First, BI is one of the main methods for solving finite-horizon DP problems according to the recursive relationship (i.e., the Bellman equation) [35]. It has been applied to various domains, such as verification planning [36], [37], petroleum exploration [38], and circuit design [39]. The BI methods in these studies are based on explicit decision graphs, such as decision trees or influence diagrams. However, because CAs can change the results of VAs, there is no explicit decision graph in this study. Second, iterative methods, such as value iteration and policy iteration, begin with a rough approximation of the solution strategy and solve for the optimal strategy iteratively. They are usually applied within the framework of Markov decision processes [40], [41], [42]. For example, Velimirovic et al. [43] used the policy iteration method to locate faults in a power distribution network. Even though iterative methods do not need explicit decision graphs, they are less efficient than the BI method because they require many iterations over all states to obtain the solution. Third, the AO* algorithm, which originates from graph search problems, is an approximate forward DP method with a heuristic evaluation function [44]. The AO* algorithm is superior to other heuristic methods for binary identification problems [31]. For example, Vomlelová and Vomlel [45] applied the depth-first search algorithm with pruning and the AO* algorithm to find a troubleshooting strategy. Warnquist et al. [46] applied AO* to solve the troubleshooting problem of heavy vehicles. However, because CAs can make existing verification results invalid (as described in Section III-B), it is hard to find an appropriate admissible heuristic function to prune unnecessary states. Therefore, this approach is not considered in this article.

III. JOINT VERIFICATION-CORRECTION MODEL

A. BELIEF MODELS OF VAS AND CAS
We consider that a given system can be decomposed into a set of system elements and assume that the objective of system verification is to verify relevant requirements for these elements. We conceive system verification as a set of tuples of system parameters θ_1, ..., θ_I associated with these requirements and the VAs that provide information about such system parameters, denoting the resulting verification evidence of a VA by μ_j with j = 1, ..., J [2]. Using the modeling framework presented in [18], we built a basic system verification model as a BN where nodes representing VAs {μ_j} are treated as observable nodes (those whose node states can be observed directly) and nodes representing system parameters {θ_i} are treated as hidden nodes (those whose node states cannot be observed directly but are inferred from the values of the observable nodes). For example, consider a computer system that has two parameters, processor speed (denoted by θ_1) and computer speed (denoted by θ_2), and each parameter has its own VA (denoted by μ_1 and μ_2, respectively). The corresponding BN is shown in Fig. 1(a).
Because the interpretation of the information provided by the VAs is subjective [47], we capture the information about system parameters as beliefs. Without loss of generality, all nodes are assumed to be binary (i.e., two node states, such as pass/fail or compliant/noncompliant). The nature of Bayesian analysis, and of BNs by extension, allows for easy removal of this restriction and the use of any number of discrete values and even continuous belief distributions [48]. The specific beliefs of a network node are presented as a conditional probability table (CPT) in this article. Each CPT summarizes the dependency relationships between a node and all its parent nodes. After all CPTs are elicited as prior distributions of a BN, the impact of a VA (denoted by μ_j) on the beliefs is modeled as follows: 1) a verification result is collected after executing μ_j (i.e., the observable node μ_j is observed), and 2) the posterior distributions of the network nodes are updated by Bayes' rule.
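For concreteness, the following minimal sketch reproduces this update for the computer-system example of Fig. 1(a) by plain enumeration of the joint distribution, rather than with a BN library. All CPT numbers are hypothetical placeholders, not values from the article.

```python
from itertools import product

# Hypothetical CPTs for Fig. 1(a); states: 1 = good/Pass, 0 = faulty/Fail.
p_theta1 = {1: 0.9, 0: 0.1}                                # prior on processor speed
p_theta2 = {1: {1: 0.95, 0: 0.05}, 0: {1: 0.30, 0: 0.70}}  # P(theta2 | theta1)
p_mu1 = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.2, 0: 0.8}}         # P(mu1 | theta1)
p_mu2 = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.2, 0: 0.8}}         # P(mu2 | theta2)

def joint(t1, t2, m1, m2):
    """Joint probability of one full assignment of the four nodes."""
    return p_theta1[t1] * p_theta2[t1][t2] * p_mu1[t1][m1] * p_mu2[t2][m2]

# Execute mu1 and observe Pass; update the belief on theta2 by Bayes' rule.
states = list(product((0, 1), repeat=4))                   # (t1, t2, m1, m2)
num = sum(joint(*s) for s in states if s[2] == 1 and s[1] == 1)
den = sum(joint(*s) for s in states if s[2] == 1)
print(f"P(theta2 = good | mu1 = Pass) = {num / den:.3f}")  # 0.934
```

The same enumeration yields the posterior of any node after any combination of observed VA results, which is all the belief model requires of the underlying inference engine.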
CAs are defined as those activities that correct errors or defects found during system development [8]. In our previous study [8], uncertain evidence was leveraged to model the effects of CAs on the confidence in engineered systems. Three basic types of CAs were modeled with their uncertain evidence: rework, repair, and redesign. First, when a rework activity is executed to replace a faulty element with a working one, we assume that this activity has no impact on other elements or on their dependencies. Therefore, uncertain evidence is applied to the factor of the CPT that is the conditional distribution given that all parent elements have no errors. For example, let us assume that a rework is conducted and a new processor replaces the faulty one. Because the new processor is an exact replica of the faulty one, the speed of the new processor shares the same parameter θ_1 with the faulty one. Thus, the information of the rework is interpreted as uncertain evidence applied on the factor X_0(θ_1) of θ_1. This uncertain evidence is modeled as a node ϕ_0 in Fig. 1(b).
Second, when a repair activity is executed to modify a faulty element with parts, processes, or materials that were initially unplanned for that element, it is assumed that repairing the element affects the beliefs of the corresponding parameter in the verified system. We can apply uncertain evidence to represent the impact of the repair on the beliefs. For example, suppose a repair activity is conducted to improve overall computer speed. As the repair is applied to the system, this piece of evidence is captured as uncertain evidence applied on θ_2 directly. The uncertain evidence is shown as a node ϕ_1 on θ_2 in Fig. 1(c).
Third, when a redesign activity is used to rebuild some faulty elements based on a new design, the beliefs of all relevant parameters will change after the redesign. Different from rework and repair, redesign may affect the dependency between the parameters of the system. Thus, the structure of the BN may need adjustment. For example, suppose a redesign activity is conducted, resulting in a new processor. The speed of the new computer (since it now uses a different processor) is denoted by θ′_2 and the speed of the new processor by θ′_1. New VAs are defined for each of the attributes of the new system. Because past information is used, we can model a dependency between θ_1 and θ′_1, since the knowledge obtained on the original processor with μ_1 shapes the confidence in the performance of the new processor. The new structure of this computer system is shown in Fig. 1(d). After the new network structure is specified, the beliefs of all relevant parameters (including θ′_1, θ′_2, μ′_1, and μ′_2) need to be reestimated as uncertain evidence, and the beliefs of the BN are updated with Bayes' rule.
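As an illustration of the uncertain-evidence mechanism, the sketch below applies virtual evidence in the likelihood-ratio form to the belief of a single parameter, in the spirit of the repair example above. The prior and the likelihood ratio are hypothetical values, not ones elicited for the article's models.

```python
def apply_virtual_evidence(prior_good, likelihood_ratio):
    """Update P(theta = good) with virtual evidence whose strength is
    likelihood_ratio = L(evidence | good) / L(evidence | faulty)."""
    weighted_good = prior_good * likelihood_ratio
    return weighted_good / (weighted_good + (1.0 - prior_good))

p_theta2 = 0.62                                   # belief before the repair CA
p_after = apply_virtual_evidence(p_theta2, likelihood_ratio=4.0)
print(f"P(theta2 = good) after repair: {p_after:.3f}")   # 0.867
```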

B. EXTENDED PARADIGM OF VERIFICATION PLANNING
In this section, we extend the paradigm of verification planning by considering both VAs and CAs as independent decisions. Although VAs and CAs are two different kinds of activities, they are tightly connected because errors and defects are identified by VAs and corrected by CAs. Thus, a verification process can be viewed as the process of identifying and eliminating errors. As part of the development of the system, we consider two key process constraints. First, verification processes can be decomposed into a set of time events. Because a life cycle consists of relatively independent phases with different requirements [49], we use a time event to represent a life cycle phase. Second, there is a sequential constraint between VAs and CAs: a CA must be conducted after collecting certain VA results. This constraint stems from their inherent logical relationship in handling errors and defects. That is, without explicit evidence provided by VAs indicating the presence of errors, it is unreasonable to execute CAs to eliminate those errors. For example, if the temperature of a machining tool is tested and found to be normal, there is no apparent need to change the tool parameters.
Starting from these two process constraints, we model the verification process as a sequential process with time events t = 1, ..., T. For simplicity, we make two assumptions about the sequential process. First, the number of time events is not predetermined, and the process terminates only when there is no need for more activities. That is, we do not externally restrict the number of time events in this study. Notably, this sequential process does not have an infinite horizon because the number of activities in a BN is finite. Second, only one pair of VA and CA is conducted at each time event, and the VA is followed by the CA. That is, each time event has two time points and only one activity is conducted at each time point. These assumptions can be relaxed in future work, for example by adding horizon constraints or conducting more than one activity in parallel. Therefore, the paradigm of verification planning in this article is defined as a sequential process of repeating pairs of VAs and CAs over T time events, as shown in Fig. 2.
At each time event, verification planning consists of assigning a VA and a CA from their own activity spaces, each of which is the set of all eligible activity actions, including the action "no activity" (NA). There are two constraints on the activity spaces of VAs and CAs. First, implementing a CA can make the existing result of a VA invalid if the result of the VA depends on the corrected parameter. The reason is that once a CA changes a system parameter, all existing verification results that depend on that parameter lose their credibility for deducing accurate posterior beliefs of the system. For example, as shown in Fig. 1(a), if a CA is implemented on θ_1, the result of μ_2 becomes invalid because μ_2 is a descendant of θ_1. Second, it is unnecessary to repeat an activity if this activity has been executed and its result remains valid. Thus, such activities are not included in the activity space. In particular, the results of CAs are assumed to be always valid for simplicity. According to these two constraints, each CA can only be executed once, whereas each VA may be executed multiple times if some CAs influence the relevant parameters of that VA. Therefore, the activity spaces of VAs and CAs depend on the sequence of all activities executed before the current decision, and they change along the entire verification process; a sketch of the invalidation rule is given below.
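The following minimal sketch shows the first constraint as code: once a CA corrects a parameter, the recorded result of every VA that descends from it in the BN is discarded. The edge list stands in for the BN structure of Fig. 1(a) and is a hypothetical placeholder.

```python
# Hypothetical BN edges for Fig. 1(a): theta1 -> theta2, theta1 -> mu1,
# theta2 -> mu2.
children = {"theta1": ["theta2", "mu1"], "theta2": ["mu2"]}

def descendants(node):
    """All nodes reachable from `node` by following child edges."""
    out = set()
    for child in children.get(node, []):
        out.add(child)
        out |= descendants(child)
    return out

def invalidate(evidence, corrected_param):
    """Drop the result of every VA that depends on the corrected parameter."""
    stale = descendants(corrected_param)
    return {va: result for va, result in evidence.items() if va not in stale}

evidence = {"mu1": "Pass", "mu2": "Fail"}
print(invalidate(evidence, "theta1"))   # {} -- both VA results become invalid
```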
With the extended paradigm, the solution of verification planning is the assignment of VAs and CAs along a verification process, which is referred to in this article as the joint verification-correction strategy (JVCS). Because each VA has multiple possible results (e.g., Pass/Fail), different combinations of activities exist along the same process, and all of these possible combinations can be presented as an activity tree. One example of an activity tree is shown in Fig. 3. Each possible combination of activities is called a verification path in this study. In the example in Fig. 3, there are five verification paths, and all verification paths share the same initial activity. We align activities vertically if they are executed at the same time point. At the end of each verification path, the verification process terminates with a certain terminal state (denoted by "Stop"), as shown in Fig. 3.

C. PERFORMANCE MEASUREMENT OF JVCSS
While the paradigm of verification planning is extended with CAs, it shares the same purpose as previous paradigms: maximizing confidence in verification coverage, minimizing the risk of undetected problems, and minimizing invested effort [8]. These three aspects can be assessed in terms of their inherent value with respect to satisfying the objective of the project. Thus, performance measurement rules can be used to blend the three objectives into a unified value function [50]. We use this value function to measure and compare the performance of JVCSs.
In this article, three value factors are considered to calculate the value function of a JVCS. The first value factor is the activity cost, a fixed amount of financial resources necessary to conduct either a VA or a CA, denoted C(μ_j) for μ_j and C(ϕ_k) for ϕ_k. For example, if a rework activity is executed to replace a faulty element, the corresponding activity costs could include the purchasing cost of new elements and the labor fees to replace them. The second value factor is the failure cost, C_f(μ_j), which is incurred when the result of a VA is Fail. The third value factor is the system revenue, B(θ_i), which is obtained once the system is deployed and operates correctly. B(θ_i) depends on the evolution of the confidence that the system is operating correctly as VAs are performed. We consider the system to be deployed only when the confidence levels {P(θ_i)} of the target parameters {θ_i} reach or surpass certain thresholds {H_u}.
For simplicity, all of these value factors are summarized for each verification path. A verification path can stop in two situations: first, the confidence levels of all target parameters reach their thresholds {H_u}; second, the action "NA" is selected when assigning a VA. Consider a JVCS that has W verification paths Z_1, ..., Z_W, where each verification path consists of a set of time points. The overall value of a verification path Z_w is calculated as

U(Z_w) = Σ_i B(θ_i) δ(P(θ_i) ≥ H_u) − Σ_{μ_j ∈ Z_w} [C(μ_j) + C_f(μ_j) δ(μ_j = Fail)] − Σ_{ϕ_k ∈ Z_w} C(ϕ_k)    (2)

where δ(·) is an indicator function whose value is 1 if the statement is true and 0 otherwise. Because all verification paths of a JVCS are possible, the performance of the JVCS can be calculated as the expected value

E(U(Z_w)) = Σ_{w=1}^{W} P(Z_w) U(Z_w)    (3)

where P(Z_w), the probability of a verification path Z_w, is calculated as the product of the probabilities of all activity results along Z_w.
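To make (2) and (3) concrete, the minimal sketch below scores two hypothetical verification paths and takes their probability-weighted sum. The path structure, costs, and probabilities are illustrative placeholders, not the case-study data of Table 1.

```python
def path_value(path, revenue, thresholds_met, failure_cost=100.0):
    """Evaluate (2) for one path: revenue if deployed, minus activity and
    failure costs. `path` lists (kind, cost, failed) per executed activity."""
    total = revenue if thresholds_met else 0.0
    for kind, cost, failed in path:
        total -= cost                       # activity cost C(mu_j) or C(phi_k)
        if kind == "VA" and failed:
            total -= failure_cost           # failure cost C_f(mu_j)
    return total

paths = [  # (P(Z_w), path Z_w, deployment thresholds reached?)
    (0.7, [("VA", 50.0, False)], True),
    (0.3, [("VA", 50.0, True), ("CA", 200.0, False)], False),
]
# Expected value (3): probability-weighted sum over the verification paths.
expected = sum(p * path_value(z, revenue=20000.0, thresholds_met=ok)
               for p, z, ok in paths)
print(f"E[U(Z_w)] = {expected:.1f}")        # 13860.0
```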

D. ORDER-BASED BI METHOD
Using the models presented earlier, the verification planning problem can be conceptualized as a search for an optimal JVCS. A method that efficiently copes with the complexity of this sequential decision-making problem, in terms of the use of computational resources, is lacking. While the specific search method is independent of the JVCM presented in this article, we suggest one such method to respond to this need. Because the belief distributions of all activity results can be obtained from the BN model and the collected evidence, this planning problem satisfies the perfect-model assumption of DP methods [51]. Thus, we followed the DP method procedures to analyze this verification planning problem using the properties of verification processes. Our analysis is organized in three steps.

First, we used the state concept of DP methods (called the verification state in this article) to guide the search for optimal JVCSs. However, as activity spaces change along the verification process, it is insufficient to represent each verification state only by its time event. Instead, the history of all previous activities, as well as the activity type of the current activity, is needed to determine the current verification state before implementing the activity. In this study, we used a vector to represent each verification state S_m, defined as S_m = (Y(S_m), Y(ϕ_1), ..., Y(ϕ_K), Y(μ_1), ..., Y(μ_J)), where the following holds.
1) Y(S_m) denotes the type of the current verification state (i.e., the activity type: 0 for VA and 1 for CA).
2) Y(ϕ_k) denotes the result of a CA on the parameter θ_i (its values are {0, 1} for not corrected and corrected, respectively).
3) Y(μ_j) denotes the result of a VA μ_j (its values are {0, −1, 1} for not verified, verified with a Fail result, and verified with a Pass result, respectively).
According to the extended paradigm in Section III-B, there are four constraints on the activity spaces of all verification states:
1) if Y(S_m) = 0, the activity space is the set of VAs whose result type is 0 (i.e., Y(μ_j) = 0) plus the action "NA";
2) if Y(S_m) = 1 and the result of the previous VA is Pass, the activity space of CAs has only one action, i.e., "NA";
3) if Y(S_m) = 1 and the result of the previous VA is Fail, the activity space of CAs is the set of CAs whose result type is 0 (i.e., Y(ϕ_k) = 0) plus the action "NA";
4) whenever the value of Y(ϕ_k) for θ_i changes from 0 to 1, the value of Y(μ_j) must be reset to 0 for every μ_j that is a child node of θ_i in the BN.

Second, because this representation includes all necessary information about a verification state S_m, the strategy that starts from S_m depends only on S_m. In this article, the expected value of S_m is represented by the expected value of the strategy that starts from S_m, denoted as U(S_m). Therefore, the Bellman equation of DP [51] can be used to deduce the expected value U(S_m) from the recursive relationship between the expected values of verification states. Furthermore, if the optimal expected values of all verification states are found, then the optimal activity for each verification state can be generated by identifying the successor verification state that has the optimal expected value. The set of all optimal activities along a verification process constitutes the optimal JVCS.

Third, it should be noted that, with such a representation, all verification states are irreversible. That is, whenever a verification state S_m occurs at a time point, this verification state will never occur again in any of the following verification paths. Even though a CA may reset the results of VAs, this reset action is also recorded as Y(ϕ_k) in the vector representation. According to this property, there exists a decision graph between all verification states even though this decision graph is not explicit. Therefore, we propose an order-based BI method to search for the optimal JVCS.
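The sketch below encodes the state vector and the four activity-space constraints in Python. The helper names and the small example structure are hypothetical; in the full model these hooks would query the BN of Section III-A.

```python
NA = "NA"   # the "no activity" action

def va_space(state):
    """Constraint 1: when Y(S_m) = 0, offer VAs with result type 0, plus NA."""
    return [j for j, y in enumerate(state["Y_mu"]) if y == 0] + [NA]

def ca_space(state, last_va_failed):
    """Constraints 2-3: after a Pass only NA; after a Fail, unexecuted CAs."""
    if not last_va_failed:
        return [NA]
    return [k for k, y in enumerate(state["Y_phi"]) if y == 0] + [NA]

def apply_ca(state, k, va_children):
    """Constraint 4: executing phi_k resets the results of dependent VAs."""
    state["Y_phi"][k] = 1
    for j in va_children[k]:     # VAs that are children of the corrected theta
        state["Y_mu"][j] = 0
    return state

state = {"Y_phi": [0, 0], "Y_mu": [1, -1]}       # mu_1 passed, mu_2 failed
state = apply_ca(state, 0, va_children={0: [0, 1], 1: [1]})
print(state)        # {'Y_phi': [1, 0], 'Y_mu': [0, 0]} -- both VA results reset
print(va_space(state))   # after the reset, both VAs are eligible again
```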
The proposed method is implemented in two stages. The first stage, called the order iteration stage, determines the decision graph among all verification states. All verification states are listed and initialized with an order function Q(S_m) = 0. For each verification state S_m, the order functions Q(S_m′) of its successor states S_m′ are examined by following their sequential relationship. If the value of a successor's order function Q(S_m′) is not larger than that of Q(S_m), then Q(S_m′) is assigned the new value Q(S_m) + 1. This assignment ensures that the order functions always increase along every verification path of a JVCS. All verification states are examined iteratively until their order functions no longer change. The second stage, called the BI stage, follows the BI method [35] to evaluate the state values of all verification states. All verification states are arranged in decreasing order of their order functions. Then, the value of each verification state is updated by comparing all successor states and selecting the best one among them, which is summarized as

U(S_m) = max_{a ∈ A(S_m)} Σ_r P(r | S_m, a) [R(S_m, a, r) + U(S_m′(a, r))]    (4)

where A(S_m) is the activity space of S_m, r ranges over the possible results of activity a, S_m′(a, r) is the successor state, and R(S_m, a, r) nets the revenue obtained when the process terminates with the deployment thresholds met against the activity cost C(a) and, when a is a VA whose result is Fail, the failure cost C_f(μ_j).
When the values of all verification states are determined, the optimal JVCS is generated by identifying the optimal activities for all possible states along the verification process. The corresponding algorithm is shown in Algorithm 1.
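A minimal Python sketch of the two stages follows; it is not Algorithm 1 verbatim. The hooks `actions(s)` (the activity space of state s) and `successors(s, a)` (yielding (next_state, probability, reward) triples from the BN machinery) are assumptions of this sketch.

```python
def order_iteration(states, actions, successors):
    """Stage 1: assign order functions Q(S_m) that increase along all paths."""
    Q = {s: 0 for s in states}
    changed = True
    while changed:
        changed = False
        for s in states:
            for a in actions(s):
                for s_next, _, _ in successors(s, a):
                    if Q[s_next] <= Q[s]:
                        Q[s_next] = Q[s] + 1   # keep orders strictly increasing
                        changed = True
    return Q

def backward_induction(states, actions, successors, Q):
    """Stage 2: sweep states in decreasing Q and apply the Bellman update (4)."""
    U = {s: 0.0 for s in states}   # the full model seeds terminal state values
    for s in sorted(states, key=lambda s: -Q[s]):
        values = [sum(p * (r + U[s2]) for s2, p, r in successors(s, a))
                  for a in actions(s)]
        if values:                 # non-terminal: keep the best expected value
            U[s] = max(values)
    return U
```

Because the order functions are strictly increasing along every path, the decreasing-Q sweep guarantees that each state value is computed exactly once, after all of its successors.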

IV. CASE STUDY

A. PROBLEM DESCRIPTION
This section leverages a notional communication instrument onboard a satellite as a demonstrative case to validate our framework. The notional instrument has been used to support prior research in verification [8]. The instrument is formed by a signal generator, an amplifier, and an antenna, as depicted in Fig. 4. We restrict our attention to the following performance parameters. We use the effective isotropic radiated power (EIRP) of the communication instrument (denoted by θ_6) as the primary target, which we characterize as a function of the signal generator output power (denoted by θ_1), the amplifier gain (denoted by θ_2), and the antenna gain (denoted by θ_3). Furthermore, we consider the output power of the integrated assembly formed by the signal generator and the amplifier as an intermediate system parameter of potential interest for the verification campaign and denote it by θ_4. In addition, we consider a prototype of this communication instrument as potentially interesting for the verification campaign and denote its EIRP by θ_5. Each parameter is subjected to a dedicated VA, denoted by μ_1, ..., μ_6, where μ_i provides information about θ_i for i = 1, ..., 6. While the instrument is verified through a set of VAs, each parameter has CAs to correct potential errors and defects. Without loss of generality, this case study takes repair activities as the example of CAs; therefore, the term "CA" always refers to repair activities in this case study.
To illustrate the advantage of the proposed JVCM, we compare the resulting verification strategy against those obtained using benchmark rules from the literature [4], [6]. Specifically, we employ the rule-based corrections of current strategy planning methods, as identified in Section II-A. These benchmark methods are based on three main principles, and a sketch of the resulting correction rule follows this paragraph. First, it is assumed that a CA is only executed on the closest parent parameter node of its last VA. For example, if the result of μ_4 is Fail, only θ_4 can be corrected. That is, given a VA μ_j, the activity space of a CA has only two options: "NA" and ϕ(μ_j). Second, if a VA μ_j fails, a CA is triggered automatically as long as the confidence of the target parameter drops below a predefined threshold H_l; if the result of μ_4 is Fail and P(θ_6) < H_l, ϕ(μ_4) will be executed. Third, for simplicity, the impact of a CA is realized by setting the result of the last VA to Pass.
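The benchmark correction rule can be written compactly as below. The `closest_parent` mapping and the confidence value are hypothetical stand-ins for the case-study model.

```python
def benchmark_ca(last_va, last_result, confidence_target, H_l, closest_parent):
    """Benchmark rule: after a Fail, correct only the closest parent parameter
    of the last VA, and only if the target confidence dropped below H_l."""
    if last_result == "Fail" and confidence_target < H_l:
        return f"phi({closest_parent[last_va]})"
    return "NA"

# If mu_4 fails and P(theta_6) = 0.45 < H_l = 0.5, repair theta_4.
print(benchmark_ca("mu4", "Fail", confidence_target=0.45, H_l=0.5,
                   closest_parent={"mu4": "theta4"}))   # phi(theta4)
```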

B. MODELS AND DATA
The system parameters of this communication instrument and their possible VAs are modeled as the BN shown in Fig. 5. System parameters are denoted by θ_i and candidate VAs are denoted by μ_j. For clarity, it should be noted that the state of the performance of the integrated assembly (θ_4) depends on the performance of the signal generator (θ_1), the performance of the amplifier (θ_2), and the performance of the cabling connecting them (embedded in θ_4). Similarly, the performance of the overall communication payload (θ_6) depends on the performance of the prototype (θ_5), the performance of the integrated assembly (θ_4), the performance of the antenna (θ_3), and the performance of the cabling between them (embedded in θ_6). The CPTs of all nodes are synthetic and have been generated using probabilistic causal interaction models (i.e., generalized noisy-OR and noisy-AND models) [52], [53], [54], [55], which take into account the physical meaning of the different nodes when estimating their mutual effects so that the data remain reasonable. This approach has been used in prior verification research [8], [56].
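As an illustration of how such CPT entries can be produced, the sketch below evaluates one entry of a noisy-OR model in which node states are read as fault indicators. The link probabilities and leak term are hypothetical, not the case-study values.

```python
def noisy_or(parent_faulty, link_probs, leak=0.01):
    """P(child faulty | parents): each faulty parent i independently causes a
    child fault with probability link_probs[i]; `leak` covers other causes."""
    p_no_fault = 1.0 - leak
    for faulty, p in zip(parent_faulty, link_probs):
        if faulty:
            p_no_fault *= 1.0 - p
    return 1.0 - p_no_fault

# CPT entry for theta_4 given that both theta_1 and theta_2 are faulty.
print(f"{noisy_or([1, 1], link_probs=[0.8, 0.7]):.4f}")   # 0.9406
```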
In this experiment, we assume that system revenue is driven by system parameter θ_6. Hence, θ_6 is set as the single target parameter. Its marginal prior confidence is P(θ_6) = 0.676. The threshold for the system deployment rule is set to H_u = 0.90. All value factors have been synthetically generated in units of thousands of dollars ($1000). The activity costs of the different activities, as well as the failure costs, are provided in Table 1. The likelihood ratios of all repair activities are provided in the fourth row of Table 1. The revenue B(θ_6) has been set to 20 000, which is not larger than the sum of all costs. The purpose is to embody the selection tradeoff between different activities in practice; that is, the expected value of a suboptimal strategy can be negative.

C. EXPERIMENTAL RESULTS
With this model, we conducted the order-based BI algorithm to solve for an optimal JVCS. The proposed JVCM and algorithm are realized with Python 3.6 and the Bayes Net Toolbox for MATLAB [57]. With six system parameters and six VAs in this BN, there are 2 × 2^6 × 3^6 = 93 312 total verification states. After the order iteration, the order functions of all of these verification states are updated in 20 iterations, which took 395 s. The resulting order functions of all verification states range from 1 to 38. When the BI concludes, the optimal activities at all states are identified to constitute the optimal JVCS. With 20 verification paths (i.e., the number of terminal "Stop" states), the optimal JVCS has 193 tree nodes (excluding terminal "Stop" states). The depth of this JVCS is 7 time events (i.e., 14 time points). Due to limited space, only the first 5 time events are plotted in Fig. 6 and all dashed lines have been omitted. Each node of the JVCS suggests the optimal activity at the corresponding state. For example, at the initial state, it is suggested to execute μ_5 first. If its result is Fail, P(θ_6) drops to 0.609 and a CA on θ_5 is suggested. However, if its result is Pass, P(θ_6) increases to 0.710 and no CA is necessary. Then, the verification process goes to the next time event. The expected value of this optimal JVCS is 11 523 according to (2) and (3). The total running time is 4674 s.

We also solved for the verification strategies with the current correction rules introduced in Section IV-A. In this experiment, three different thresholds H_l are evaluated, specifically 0.2, 0.5, and 0.9. The largest value of H_l is set to 0.9 because the system is deployed once P(θ_6) reaches H_u = 0.90. The cost of the CA following a VA μ_j is set equal to the repair cost of θ_i, the nearest parent node of μ_j; these costs are shown in the fifth row of Table 1. Because there are no decisions for CAs in these rules, the verification state only depends on the six VAs, each of which has four states: "Not Verified," "Verified with the Fail Result," "Verified with the Pass Result," and "Corrected." Thus, there are 4^6 = 4096 states. The order-based BI method described in Section III-D was employed and implemented using the same tools (Python 3.6 and the Bayes Net Toolbox for MATLAB) [57]. The optimal strategies are shown in Fig. 7. The expected values of the strategies are 4584, 8819, and 9816 for H_l = 0.2, 0.5, and 0.9, respectively. The total running times are 499, 517, and 519 s for H_l = 0.2, 0.5, and 0.9, respectively.

D. DISCUSSION
As given in Table 2, using the proposed JVCM improves the performance of system verification by at least (11 523 − 9816)/9816 = 17.4% relative to the best benchmark rule, which shows the advantage of our model. This improvement is attributed to the explicit decisions on CAs. As more CA choices are provided at each verification state in the JVCM, we can select CAs from a larger activity space and choose the one with the best expected value. That is, for each parent state, a child state with better performance can be found under the JVCM. If this deduction is repeated for all verification states, the performance of the initial verification state (i.e., the performance of the JVCS) is higher than those of the benchmark rules. However, the computational time required by the proposed JVCM is much larger than that of the benchmark methods, specifically by a factor between 4674/519 = 9.01 and 4674/499 = 9.37. This is also explained by the difference in the size of the activity spaces. With more CAs provided, the total number of verification states increases from 4096 to 93 312. As all verification states have to be examined, the JVCM has higher overhead. This time difference does not account for the time required to construct the model, which grows significantly when including CAs as decision nodes.
These results provide interesting insights for the selection of planning rules. If the total number of verification states is small and computational resources are sufficient, the proposed JVCM may be worth using in strategy planning. Otherwise, the benchmark rules may be considered as alternative methods to obtain approximate strategies. In particular, these benchmark rules can be analyzed from two perspectives. First, CAs in the benchmark rules are tied to the parameter that has just been verified, which leads to smaller activity spaces; future work could explore how to expand the activity spaces of benchmark rules effectively. Second, the predefined thresholds (i.e., H_l) play the same role as the corrective decisions of the JVCM in deciding which CA is selected. It is found from this case study that, among the three benchmark rules (H_l = 0.2, 0.5, 0.9), the rules with higher thresholds always perform better for strategy planning. In particular, if CAs are executed whenever a Fail result is collected (i.e., H_l = 0.9), the generated strategy has the best performance. While there are other possible threshold values, and each time event could be assigned a different threshold, investigating the optimality of different threshold rules is reserved for future work. A systems engineer could use these findings to decide whether using the JVCM is worth the additional modeling effort with respect to the benchmarks.
The time analysis for the calculation of the optimal JVCS also shows why our proposed method can outperform successive approximation methods. While the 20 iterations over all order functions cost 395 s during the order iteration stage, updating all state values during the BI stage took 4674 − 395 = 4279 s. Thus, one sweep of order-function updates is about 4279/(395/20) ≈ 217 times cheaper than one sweep of state-value updates. Once the order functions between verification states are established, the state value of each verification state needs to be updated only once. In contrast, successive approximation methods solve for the optimal strategy by iterating over all verification states two or more times. Even if all states were evaluated only twice, in the best scenario, the total running time would be approximately 4279 × 2 = 8558 s, which is much larger than that of our proposed method. Thus, the order-based BI method is the best choice for solving for exact optimal verification strategies in this setting.

V. CONCLUSION
We have presented a JVCM that incorporates VAs and CAs jointly as dedicated decisions. A BN model is built to quantify the impact of VAs and CAs on belief updates through Bayesian inference. An extended paradigm is used to establish the dependency relationship between VAs and CAs, with all necessary constraints along verification processes. Three value factors are used to measure the performance of the verification strategies. Considering the dynamic nature of verification planning, we provide an order-based BI method to seek the exact optimal strategies that specify the optimal activities for all possible verification states.
Use of the JVCM was illustrated with a case study. A notional satellite communication instrument was used to evaluate the performance of the proposed JVCM against three benchmark rules that represent current correction rules in the literature. The instrument system and all activities were modeled according to their physical meaning in verification planning, and all data were generated synthetically. The results of the case study show that the proposed JVCM outperforms the benchmark rules with a larger expected value, but at a higher computational cost. This difference is attributed to the explicit decisions on CAs provided by the extended paradigm. A time analysis also showed the advantage of the proposed search algorithm: it saves computational time by leveraging the order functions of verification states.
We identified several opportunities to extend our work. First, the JVCM presented in this article embeds some major assumptions that were necessary for simplicity. These include the restriction to one pair of VA and CA at each time event and an unrestricted number of time events. We suggest future work to explore the effects of refining the temporal aspects of our model. Second, the case study has been limited to one type of CA. Future work that illustrates how verification planning is supported by the JVCM with other types of CAs would contribute to the generalization of the model. Third, the threshold values in the benchmark method seem to have a strong effect on the potential value of the proposed JVCM. We suggest investigating the existence of optimal thresholds in the context of verification planning. Fourth, the proposed search algorithm has only been applied to small networks whose activity spaces are very limited. We suggest work to develop methods aimed at reducing the computational time required to solve the verification planning problem for large network models.