Real-Time Verification of Network Properties Based on Header Space

The past ten years have seen increasingly rapid advances in the field of network verification, in which data plane verification plays a crucial role. Recent developments in SDN, which was proposed to improve network flexibility and programmability, create a need for dynamic verification. SDN also provides the ability to dynamically acquire the network state and the rule-updating behavior, which makes it possible to perform real-time data plane verification. Previous research on real-time data plane verification either performs the verification on simplified forwarding rules or traces symbolic headers across the network behaviors of each switch. Inspired by the discussion of the header space in HSA, we focus on the rules themselves and define the computation of transform functions in header space with BDD expressions to obtain the connection information of rules. The BDD representation allows the transform functions and the rules to be merged into their simplest forms, reducing the run time. We propose an updating algorithm based on matrix operations for incrementally updating rules. Typical network invariants are translated into requirements on the matrix model. Finally, we extend the model to verify invariants in multi-domain networks. Our prototype, NetV, has been compared with NetPlumber and APT on the rule sets of the Stanford backbone network and the Internet2 network. The experiments show that NetV performs better than these two tools on the two rule sets in OpenFlow format. A simple experiment on real-time verification across two domains is also conducted; the result indicates that NetV has advantages over the simulative method of tracking symbolic headers.


I. INTRODUCTION
Recently, there has been renewed interest in network verification [4], [5]. Managing a large-scale network is one of the most significant challenges. Setting switch rules manually from experience is error-prone. It is also hard to manually reason about network failures because of the scale of large networks and the diversity of protocols. Network operators need to know which packets from one host can reach another host during the debugging process, and the effect of a newly added configuration should be explored to keep the network stable and reliable. Inspired by well-known technologies in the fields of software and hardware, the entire network can be seen as a ''program'' that takes packets as input and delivers the transformed packets to the output.
In the field of network verification, data plane verification has attracted considerable attention, since the data plane is more naturally modeled with well-understood semantics than the control plane. In conventional networks, a snapshot of the configuration and topology of the network can be collected for off-line verification. Xie et al. [6] first proposed a method for static reachability analysis from a snapshot of the configuration state. Mai et al. [7] and NetSAT [8] perform data plane verification based on a SAT solver. Al-Shaer et al. proposed ConfigCheck [9] to verify network reachability by using computation tree logic (CTL) and a symbolic model. They extended their work and proposed FlowChecker [10], which uses the binary decision diagram (BDD) to encode the FlowTables and uses a state machine representation to model the interconnection of switches. Al-Haj et al. further extended the work of FlowChecker to handle the structure of multiple FlowTables [11]. FLOVER [12] encodes the flow rule sets into Yices assertions. HSA [1] treats the behavior of a switch as a transfer function on packet headers. It uses a geometric space with a bit-wildcard expression, called the header space, to represent sets of packets. Reachability is verified by tracing symbolic packet headers (all wildcards) from all in-ports of the network. Zhang et al. designed two testing protocols for inter-domain loop tests; they utilize HSA to verify the reachability between the in-ports and the out-ports of a single domain [13]. The recently proposed software-defined network (SDN) decouples the control plane from the forwarding plane, which makes the configuration of computer networks programmable. However, it is naturally associated with an increased risk of misconfiguration.
By using OpenFlow [14], the centralized controller adds flow entries into the flow tables on switches to handle network flows according to the deployed applications. Meanwhile, the centralized observation of the flow tables and the network topology provides the opportunity to verify network behaviors automatically and dynamically. This raises a new and more complicated requirement: verifying the data plane in real time. Previous works leave two main gaps once SDN is introduced. First, tools based on SAT solvers provide only a single counter-example when a violation happens, and it is hard to reason about the underlying error from that counter-example. Second, they all suffer from dynamic-performance and scalability problems.
To address these problems, VeriFlow [15], [16] was the first tool that can dynamically verify a rule in real time before it is installed. The packet equivalence classes (ECs), which represent sets of packets that follow the same network behavior, are computed from the rules stored in a trie structure, and the paths are calculated for each EC. However, it is hard to introduce rewrite actions that change the packet header. Li et al. introduced field transition rules into VeriFlow for defending against covert channel attacks [17]. Since the ECs, which are calculated from the match fields of rules, do not imply that packets follow the same paths in a network with modifying rules, this approach can hardly be used to verify reachability with rewrite rules in SDN. NetPlumber [2] presents a method to perform incremental verification by running HSA checks incrementally. However, the wildcard expression causes an explosion of expressions after the minus operation in the header space. Yang and Lam proposed the Atomic Predicates (AP) Verifier [18] to compute reachability trees. The forwarding tables are merged into port predicates representing packet filters. They precompute the packet ECs with BDD expressions and then label each EC with a unique integer identifier, called an atomic predicate. The reachability trees are obtained by traversing the identifiers injected into the in-ports of the network. Using identifiers reduces the computation of traversing symbolic packet sets with BDD expressions. They improved their work and proposed APs for Transformers (APT) [3] for scaling to large networks with transformers. The APs are computed iteratively: each new predicate produced by a transformer on the current APs is used to update the APs. The dynamic performance is limited by the incremental computation of the APs when updating a rule.
In this paper, inspired by HSA [1], we redefine the function of rules based on header space and separate the header transformation and the position transformation into two dimensions. The header transformation only changes packet headers. We then use a matrix structure to model the connection of rules; the position transformation is implied in the matrix structure. Each element of the matrix is a header transformation on the header space, while it also implies the position transformation from the input of one rule to the input of another. Different from HSA, we define the computation of the composition of two rewrite actions rather than performing rewrite actions on symbolic headers. For example, given a rewrite rule with parameter xxxx0010 and a next-hop rewrite rule with parameter xxxx01xx, they can be treated as one rewrite rule with parameter xxxx0110 if we only consider the reachability from the source to the destination. This definition and computation provide two advantages. First, if we change the constraint on the input after completing the computation of a specific field, only a small-scale computation on the changes is needed to obtain the reachability changes of the whole network, rather than re-simulating the symbolic packets of the changes across the field. Second, the equivalence of header transformations allows some rules to be merged into a single expression without losing their functionality, since a flow entry may contain multiple header-changing actions. This makes the method easy to extend to a multi-layer controller for verifying inter-domain invariants. We then use the BDD expression to represent the domain and the range of the transformation, eliminating the uncertainty and the explosion of expressions. Thus, two rules can be merged into an abstract one with a BDD domain if they have the same functionality.
We also propose an incremental updating algorithm to update the connectivity matrix, which represents the complete connection information of the network. Common invariants are expressed in this model and can be verified incrementally when rules are updated. The experiments show that NetV performs better than NetPlumber and APT.
The contributions of this paper are: 1) We redefine the rule function based on header space and present the algorithms for the transformation and the inverse transformation on the binary decision diagram (BDD) representation. This also allows rules to be merged based on their functionality, which remarkably speeds up the updating process.
2) We further define the transform function and use the matrix structure to model the connection of the rules. We then propose an updating algorithm to dynamically update rules and maintain the reachability information of the entire network.
3) We present the verification of primary invariants based on this model and then extend it to a multi-layer controller for verifying inter-domain invariants.
The remaining parts of this paper proceed as follows: Related work is reviewed in Section II. Our basic network model, which includes the definition of the rule function, the transform function on BDD expressions, and the fundamental adjacency matrix, is presented in Section III. The dynamic updating algorithm is presented in Section IV. In Section V, we present the verification of some common invariants and the inter-domain loop verification. The performance evaluation of NetV is presented in Section VI. The conclusion is discussed in Section VII.

II. RELATED WORKS
This paper focuses on real-time data plane verification. For off-line verification, Xie et al. [6] first proposed a method for static reachability analysis that jointly reasons about how each behavior affects reachability, starting from a snapshot of the configuration state of each router. Mai et al. [7] and NetSAT [8] perform data plane verification based on a SAT solver. Anteater [7] collects a snapshot of the topology and the forwarding information bases (FIBs) and models the invariants as SAT instances; the SAT solver is then used to verify whether the instances are satisfiable. Al-Shaer et al. proposed ConfigCheck [9] for reachability verification from the configuration. They model the network with state transitions and construct a transition relation for each device. BDDs are used to represent states and to verify whether a symbolic state satisfies the transition relation, and CTL is used to represent security properties. They extended their work and proposed FlowChecker [10] for OpenFlow switches by encoding the FlowTable matching semantics with BDDs. For intra-federated flow isolation, they make pairwise comparisons for each pair in the domain. Al-Haj and Tolone further extended the work of FlowChecker to handle the structure of multiple FlowTables, treating the table rather than the device as the location of the state, in order to check FlowTable pipeline misconfigurations [11]. FLOVER encodes the flow rule sets into Yices assertions and verifies a property by checking whether the property formula is unsatisfiable [12].
These works face two practical issues. First, the verification time cost is high. Second, the SAT/SMT-solver-based tools return only a single counter-example for a violation, which is hard to use for analyzing the cause of failures. HSA [1] uses transfer functions to represent switches. It traces symbolic packet headers injected from the source node as the transfer functions of the switches transform them, and it translates the invariants into requirements on reachability. In this way, all the reachability information and the changes of the packet headers from the source node can be obtained. However, the introduction of SDN provides the ability to dynamically obtain network state changes and data plane rule updates, which in turn creates a requirement for real-time network verification.
Toward real-time data plane verification, VeriFlow [15], [16] was designed for real-time verification of network-wide invariants. The filters of rules, which are stored in a trie, are computed into ECs by an involved method. When updating a rule, VeriFlow computes the affected ECs and then generates a forwarding graph for each affected EC by connecting the involved forwarding rules. It speeds up real-time verification by computing only the ECs affected by the forwarding rules. However, the ECs cannot be used for header-changing rules, since the headers may change along a path and the path cannot be represented by the same filter on each switch. Recently, Li et al. [17] introduced field transition rules into VeriFlow to defend against covert channel attacks. The generated ECs can be updated periodically when rules are updated, and bi-directional forwarding graphs are constructed for each maintained EC. When adding a header-changing rule, they correlate the newly generated ECs with the ECs affected after the header change and then compare the forwarding graphs. However, the header-changing rules still cannot take effect in the forwarding graphs for verifying reachability.
NetPlumber [2] performs incremental verification by running HSA checks incrementally. It uses rules instead of switches as the nodes of the graph. The pipes between rules reduce the computation when propagating flows through the pipes. The incremental update is done by maintaining, on these pipes, the flows (symbolic headers) that are propagated from the source node. When updating a rule, the flows before the rule are propagated through the newly updated pipes. Zhang et al. [13] designed two testing protocols for inter-domain loop tests. They use HSA to obtain the reachability of the given headers between the in-ports and the out-ports of a single domain and check whether there are inter-domain loops. Following this concept, NetPlumber can also be used for the real-time inter-domain loop test. Compared with NetPlumber, we use the BDD expression to resolve the explosion of the wildcard expression after multiple union and difference operations. The merged rules reduce the redundancy of rules that perform the same behavior with different wildcard-expression filters. The composition operation of the transform function in header space and the matrix operation provide a more flexible way to acquire the desired end-to-end reachability associated with a transform function.
Yang and Lam proposed the Atomic Predicates (AP) Verifier [18] to verify the reachability of the network. They translate the forwarding rules into port predicates. They reduce the propagation time by precomputing the APs, associating each with an integer identifier, and propagating these identifiers for all-pairs reachability. They improved their work and proposed APT [3] for scaling to large networks with transformers. The APs with transformers are computed by iteratively obtaining the transformed predicates from the current APs and using these predicates to update the APs. Their experiments show that the time for traversing the identifiers is short. However, the dynamic performance of APT is limited by the per-update run time of the APs, which is strongly related to the number of existing APs. APT is suitable for networks with many rules that generate few APs. In our experiment in Section VI-D, the rule set generates many APs, and each rule update takes non-negligible time.

III. NETWORK MODEL
We model an SDN network as a digraph of symbolic rules that contain the forwarding action and the header-transforming action. Each abstract rule can be an SDN rule directly or a merging of SDN rules. The basic rule function is first defined based on the header space. Then we design an algorithm that performs the rewrite action on the BDD expression for the composition of functions in header space. At the end of the section, the adjacency matrix is built to represent the connection of rules. The operations on the matrix reflect the composition and the merging of the functions. The symbols used in this section are described in Appendix A.

A. FUNCTIONS OF RULES
HSA [1] has defined the header of a packet as a point in {0, 1}^L, where L is the length of the header. This space is called the header space, H. The position of a packet is identified by a directed link in the network. We extend their definition of network space and model the network as a set of links and the rules that connect these links. A packet is defined as (h, l) in the network, where h is the header and l is the position of the packet. It represents a packet with a given header traversing a link (including the virtual links between flow tables). The identifiers of switch ports are replaced by the identifiers of links. Therefore, the space {0, 1}^L × {l_1, . . . , l_N} is called the network space, where l_i is an integer that represents a unique directed link in the network.
In SDN, a rule is a tuple of match fields and an action set. The action set is an ordered set and ends with a forwarding action. In OpenFlow, there are two main types of actions: actions that modify the headers of packets and actions that forward packets. The two actions are written as a_m and a_f, and they can be represented as functions that act on different dimensions. The forwarding action only changes the packet's position and can also copy the matched packet to different ports. Thus we introduce ''+'' to represent the increase of the packet count. The forwarding action is written as a_f(l) = l_{i_1} + l_{i_2} + . . . + l_{i_k}, which means that a_f copies and forwards the matched packet to the specified links.
The modifying action a_m only changes the header fields of packets. It is written as a_m(h) = h*. The encapsulation and de-encapsulation actions are defined in Appendix B. In the main body of this paper, we only consider the rewrite function a_rw. For the rewrite function, every bit h*_i = (m_i ∧ h_i) ∨ v_i, where m is the mask of the rewrite, v is the rewrite value, and the condition m_i ∧ v_i = 0 means that if a specific bit of m is 1, the corresponding bit of v is 0. Therefore, the composition of two rewrite actions is a_rw,c(h) = a_rw,2 a_rw,1(h) = h*, where every bit h*_i = ((m_2,i ∧ m_1,i) ∧ h_i) ∨ ((m_2,i ∧ v_1,i) ∨ v_2,i). The parameters of a_rw,c are computed bitwise as m_c,i = m_2,i ∧ m_1,i and v_c,i = (m_2,i ∧ v_1,i) ∨ v_2,i. The composition of further modifying actions can likewise be recorded as one. For example, given two rewrite actions a_rw,1 and a_rw,2 with parameters (11000000, 00101101) and (11110000, 00000110), which can be written as xx101101 and xxxx0110 in wildcard expression, the parameter of the composition a_rw,c is (11000000, 00100110). If the header h of a packet is 10000001, then a_rw,1(h) is 10101101. The value a_rw,2 a_rw,1(h) is the same as a_rw,c(h), and the result is 10100110.
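The composition rule above can be checked with a small script. This is an illustrative sketch in which the helper names and the 8-bit width are our own choices, not part of NetV:

```python
# Sketch of rewrite-action composition in an 8-bit header space.
# A rewrite action is a pair (m, v): mask bits set to 1 are preserved,
# mask bits set to 0 are overwritten by the corresponding bits of v,
# so m & v == 0 must hold.

def apply_rw(m, v, h):
    """Apply the rewrite action (m, v) to header h: h* = (m & h) | v."""
    return (m & h) | v

def compose_rw(m2, v2, m1, v1):
    """Compose two rewrite actions: first (m1, v1), then (m2, v2)."""
    return m2 & m1, (m2 & v1) | v2

# The example from the text: a_rw,1 = (11000000, 00101101),
# a_rw,2 = (11110000, 00000110).
m1, v1 = 0b11000000, 0b00101101
m2, v2 = 0b11110000, 0b00000110
mc, vc = compose_rw(m2, v2, m1, v1)
print(f"{mc:08b} {vc:08b}")  # -> 11000000 00100110

h = 0b10000001
print(f"{apply_rw(m2, v2, apply_rw(m1, v1, h)):08b}")  # step by step -> 10100110
print(f"{apply_rw(mc, vc, h):08b}")                    # composed     -> 10100110
```

Running it reproduces the values in the text: the composed parameters are (11000000, 00100110), and both evaluations of h = 10000001 yield 10100110.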
A rule can be modeled as a composition of actions that maps a packet with header h arriving on link l to a packet set. A rule is defined as r_i(h, l) = {(a_m(h), a_f(l)) | h ∈ X_i}, where X_i is the domain of r_i, which is usually the wildcard expression of the match fields of the rule. Therefore, the composition of two rules is defined as r_j r_i(h, l) = r_j(r_i(h, l)), whose domain is {h ∈ X_i | a_m,i(h) ∈ X_j}. If the conflicting parts of the two rules are removed, the merging of two rules in the same table is defined as the rule that applies r_i on X_i and r_j on X_j; the two rules take action on disjoint domains. If the action sets of the two rules are the same, the merging can be written as a single rule with domain X_i ∪ X_j.

B. THE MODIFYING ACTION ON BDD EXPRESSION
The data structure for representing the domain and the range in HSA [1] is the wildcard expression. If a given bit is the wildcard *, it represents a set that contains both values 0 and 1. However, wildcard expressions cause an explosion of expressions after union or minus operations, even with the Lazy Subtraction optimization. The union and minus operations may also produce non-unique wildcard expressions. For example, 1xxx − 1100 can be written as 10xx ∪ 1x1x ∪ 1xx1 or as 10xx ∪ 111x ∪ 11x1: the same result can be written as two different wildcard expressions. Thus, different wildcard expressions must be checked for equivalence, and the expression of the domain becomes more and more complicated as minus operations continue. AP [18] has discussed the desirable properties of BDDs [19] for representing header space. Therefore, we use the BDD expression to represent the header fields of the domain and the range. Thus, if the actions and the in-ports of r_i and r_j are the same in (3), the merging of r_i and r_j can be written as a single rule whose domain is the BDD of X_i ∪ X_j. This operation reduces the redundancy of rules with the same actions. The forwarding action does nothing to the header space of packets. Since in the following sections the forwarding action of a rule is implied in the matrix expression, we separate the modifying function from the rule function and only consider the composition of modifying actions and their domains and ranges on the header space. The parameters of the composition of modifying actions are computed with the wildcard expression to reduce the additional cost of BDD operations. We present the algorithm for computing the domain and the range of rules from their opposites under rewrite actions in Algorithm 2. It uses Algorithm 1, Set_x(BDD, mask_BDD), which sets the given fields of a BDD to x (the wildcard); its inputs are the processed BDD root node a and the given mask BDD root node b.
The basic idea is to set the rewritten bits to the wildcard first and then narrow the set by the BDD of the domain or of the rewritten values. The rewrite action fills the modified fields with the given values; this is equal to the expression in APT [3]. We extend it to the inverse process for computing the domain from the range of the action. The inverse process fills the modified fields with the values of the corresponding fields of the domain. For example, if a rewrite action (11110000, 00000101) modifies a domain 10xxxx10, the result of setting wildcards is 10xxxxxx. The B_rw is the BDD expression of xxxx0101, which represents the modified value. The final result is 10xxxxxx ∩ xxxx0101 = 10xx0101. If we narrow the range to 10x10101, the result of setting the wildcards of the range is 10x1xxxx. The result of the inverse is 10x1xxxx ∩ 10xxxx10 = 10x1xx10.
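Both directions of this computation can be checked by brute force over an 8-bit header space. The set-based representation below stands in for the BDDs and is only an illustrative sketch with our own helper names:

```python
# Brute-force check of the forward image and the inverse (preimage)
# of a rewrite action, using explicit header sets in place of BDDs.

def wc(pattern):
    """Expand a wildcard pattern such as '10xxxx10' into a set of 8-bit headers."""
    out = {0}
    for ch in pattern:
        out = {h << 1 | b for h in out for b in ((0, 1) if ch == 'x' else (int(ch),))}
    return out

def image(m, v, domain):
    """Forward: the range of the rewrite (m, v) applied to a domain."""
    return {(h & m) | v for h in domain}

def preimage(m, v, domain, rng):
    """Inverse: the headers of the domain whose rewritten form lies in rng."""
    return {h for h in domain if ((h & m) | v) in rng}

m, v = 0b11110000, 0b00000101
# The two worked examples from the text:
assert image(m, v, wc('10xxxx10')) == wc('10xx0101')
assert preimage(m, v, wc('10xxxx10'), wc('10x10101')) == wc('10x1xx10')
```

The two assertions reproduce the forward result 10xx0101 and the inverse result 10x1xx10 from the example above.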
Algorithm 1, which is used in Algorithm 2, is the BDD operation that sets the given variables to the wildcard. Although the basic idea is equal to the expression ∃x.P = P|x=true ∨ P|x=false in APT [3], Algorithm 1 performs the computation in one basic iteration over the BDD rather than using the ''restrict'' and ''or'' operations a couple of times. The experiment in Section VI-A shows that Algorithm 1 is eight times faster. The required mask BDD for the above example is xxxx0000: a modified bit is set to 0, and an unmodified bit is set to the wildcard. If a node is required to be set to the wildcard, it is replaced by the union of its child nodes. The reduce operation is done in the make-node function. The transform-function structure with the BDD domain can also support encapsulation and de-encapsulation. The experiments do not include the encapsulation of a new instance of the protocol header; the details of encapsulation and de-encapsulation are described in Appendix B.
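The wildcard-setting operation can be sketched on a minimal hash-consed BDD. The node layout and helper names below are our own simplification, not the NetV implementation, but the key step is the one described in the text: a node whose variable is masked is replaced by the union of its already-processed children, with reduction done in the node constructor:

```python
# Minimal reduced, hash-consed BDD: terminals are the Python booleans,
# internal nodes are canonical tuples (var, low_child, high_child).
_unique = {}

def mk(var, lo, hi):
    """Reduced node constructor (the 'make node' function in the text)."""
    if lo == hi:
        return lo
    return _unique.setdefault((var, lo, hi), (var, lo, hi))

def bdd_or(a, b, memo):
    """Standard apply-based union of two BDDs."""
    if a is True or b is True:
        return True
    if a is False:
        return b
    if b is False:
        return a
    if (a, b) in memo:
        return memo[(a, b)]
    v = min(a[0], b[0])
    alo, ahi = (a[1], a[2]) if a[0] == v else (a, a)
    blo, bhi = (b[1], b[2]) if b[0] == v else (b, b)
    memo[(a, b)] = mk(v, bdd_or(alo, blo, memo), bdd_or(ahi, bhi, memo))
    return memo[(a, b)]

def set_x(a, masked, memo):
    """Set every variable in `masked` to the wildcard in a single traversal:
    a masked node is replaced by the union of its processed children."""
    if a is True or a is False:
        return a
    if a in memo:
        return memo[a]
    v, lo, hi = a
    lo2, hi2 = set_x(lo, masked, memo), set_x(hi, masked, memo)
    memo[a] = bdd_or(lo2, hi2, {}) if v in masked else mk(v, lo2, hi2)
    return memo[a]

def cube(bits):
    """BDD for a conjunction of literals, e.g. {0: 1, 1: 0} is the set 10xx."""
    node = True
    for v in sorted(bits, reverse=True):
        node = mk(v, node, False) if bits[v] == 0 else mk(v, False, node)
    return node

# Setting the last constrained bit of 10x1 to the wildcard yields 10xx.
assert set_x(cube({0: 1, 1: 0, 3: 1}), {3}, {}) == cube({0: 1, 1: 0})
```

Because the nodes are hash-consed and reduced, two equal sets always produce the same canonical structure, so the final equality check is a constant-time comparison, which is the property the merging step in Section III-A relies on.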

C. GRAPH AND MATRIX EXPRESSION
In SDN, as shown in Fig. 1, rules are connected by the physical links between devices and the virtual links between flow tables. If two rules are connected by a link and the range of the rule on the input side intersects the domain of the rule on the output side, the two rules are adjacent. A rule graph G_R is a digraph (V, E), where V is the set of rules and E is the set of connections between adjacent rules. A directed edge between two rules therefore represents the filter of the match fields and the action on the filtered packets between the inputs of the two adjacent rules. These directed edges can locate the links. As shown in Fig. 2, we give a simple example of 5 rules located in 5 tables. The edge e_12, which is located at link l_12, represents the function of the connection between rules r_1 and r_2 in the example. We establish the adjacency matrix R = (r_ij).
The pair of adjacent rules r_ij denotes the directed edge from r_j to r_i. Thus, the forwarding actions are implied in the structure of the matrix. If the conflicts of rules are removed, the elements r_ij of the adjacency matrix are functions in header space. The function of the initial r_ij in header space is the same as the modifying action of the rule r_j. The columns of R represent the connections from the corresponding rules to the next-hop rules. The domain and the range of r_ij are narrowed by r_i and r_j: they are computed from Y_h,r_j ∩ X_h,r_i, where Y_h,r_j is the header space range of r_j and X_h,r_i is the header space domain of r_i. Thus, each element of R represents the one-hop connectivity of two rules. In the adjacency matrix constructed for Figure 2, the rewrite parameter of f_12 is xxxxxx00, and the domain X_f12 is 00xxxx10; the rewrite parameter of f_21 is xxxxxxxx, and the domain X_f21 is 00xxxx10. The others can be computed in the same way. The product of two elements r_jk and r_ij is almost the same as the composition of two rules.
The product represents the connectivity from r_k to r_i along a path through r_j. The sum of two elements is defined pointwise; if h ∈ X_ik ∩ X_jk, the plus operation implies that there exists a copy operation on the packet with header h. Therefore, the multiplication of two adjacency matrices, R^2, represents the 2-hop connections of rules, and R^n is the n-hop connection matrix. An abstract element ''1'', which does nothing when multiplied by a matrix element, is defined by 1 · r_ij = r_ij · 1 = r_ij. The identity matrix E is a diagonal matrix whose elements on the main diagonal are equal to ''1''. The connectivity of the entire rule set is represented by the connectivity matrix R_c = E + Σ_{n=1}^{N} R^n, where N is the limit on the maximum number of hops in the given network. The elements of R_c represent the connectivities of any corresponding pair of rules. In the connectivity matrix of Figure 2, g_11 = f_12 f_21: the rewrite parameter of g_11 is xxxxxx00, and the domain X_g11 is 00xxxx10. The other elements can also be computed using (7). The basic reachability of two rules can be acquired by checking the corresponding element. For example, g_41 represents all the flows that go from the input of r_1 to the input of r_4, where the rewrite parameter of a_1 is xxxxxxxx and the rewrite parameter of a_2 is xxxxxx00. It means that if a packet whose header belongs to 0010xx00 is delivered to the input of rule r_1, three copied packets will be delivered to the input of rule r_4 through the network.
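The matrix semantics can be illustrated on a toy three-rule chain. The representation below, with header sets instead of BDDs and lists of (mask, value, domain) triples as matrix elements, is a simplified sketch under our own naming, not the NetV data structures:

```python
# Toy adjacency matrix whose elements are lists of transform functions
# (mask, value, domain) over an 8-bit header space; explicit header sets
# stand in for BDD domains. R[i][j] models r_ij, the edge from r_j to r_i.

FULL = set(range(256))

def apply_rw(m, v, h):
    return (h & m) | v

def compose(f_ij, f_jk):
    """f_ij applied after f_jk, with the domain narrowed accordingly."""
    m2, v2, d2 = f_ij
    m1, v1, d1 = f_jk
    dom = {h for h in d1 if apply_rw(m1, v1, h) in d2}
    return (m2 & m1, (m2 & v1) | v2, dom)

def mat_mul(A, B):
    """(AB)_ik = sum over j of A_ij composed with B_jk; the 'sum' keeps
    every path function, modelling packet copies on overlapping domains."""
    n = len(A)
    C = [[[] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for k in range(n):
            for j in range(n):
                for fa in A[i][j]:
                    for fb in B[j][k]:
                        g = compose(fa, fb)
                        if g[2]:            # drop empty-domain paths
                            C[i][k].append(g)
    return C

# A hypothetical chain r_1 -> r_2 -> r_3: r_1 rewrites the low two bits
# to 00, r_2 forwards unchanged, and only headers 00xxxxxx are matched.
R = [[[] for _ in range(3)] for _ in range(3)]
R[1][0] = [(0b11111100, 0b00000000, wcs := {h for h in FULL if h >> 6 == 0})]
R[2][1] = [(0b11111111, 0b00000000, wcs)]

R2 = mat_mul(R, R)
(m, v, dom), = R2[2][0]      # the 2-hop connectivity from r_1 to r_3
assert apply_rw(m, v, 0b00101011) == 0b00101000 and 0b00101011 in dom
```

The single element of R2[2][0] plays the role of g_31 here: it carries both the end-to-end rewrite (low two bits forced to 00) and the narrowed domain, so a reachability query is a lookup plus one function application rather than a fresh symbolic simulation.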

IV. INCREMENTAL UPDATING

A. PREPROCESSING DURING UPDATING
The purpose of preprocessing during updating is to obtain the difference of the adjacency matrix after updating a rule. We use the method from our early work [20] to eliminate the conflicts and obtain the dominant part of the newly added rule and the covered parts of the existing rules. As shown in Fig. 4, for each link we construct a link MTBDD on the link-out side. Each terminal node of the MTBDD represents the dominant rule on the link-out side. When a new rule is added, we hash the rule to a merged rule by the action set and the matched links of the rule. The generated MTBDD of the new rule is added to the link MTBDD by an MTBDD operation. When two terminal nodes meet, in addition to the basic operation, we also update the dominant BDDs on the link, the covered-rule records of the two rules, and the dominant BDDs of the corresponding merged rules. The changes of the merged rule are then connected to the previous-hop rule and the next-hop rule. The result is the difference of the adjacency matrix after adding a rule. Deleting a rule is similar to adding one. When two terminal nodes meet, if the two nodes are the same node, which represents the rule being removed, the ''false'' node is returned; otherwise, the two rules and the corresponding merged rules are updated. Besides, the covered parts of the lower-priority rules that are covered by the deleted rule should be re-added to the MTBDD. Adding a new link creates connections between existing merged rules. Therefore, we should connect the merged rules across the added link to obtain the difference of the adjacency matrix. The MTBDDs of the matched in-ports of rules are precomputed (the rules that match all in-ports of a switch are preserved as an abstract in-port MTBDD). When a new link comes up, the link MTBDD can be obtained from the corresponding in-port. Thus, we only need to compute the connection of the merged rules across the link and generate the difference of the adjacency matrix.

B. UPDATING THE MATRIX
For a formal description, we construct two matrices, L_d (m×n) and L_s (n×l), to represent the outputs and the inputs of selected rules. For example, the in-link of the network in Figure 3 is constructed as l_s,1 = (f_s,11 0 0 0 0)^T, where the rewrite parameter of f_s,11 is xxxxxxxx and the domain of f_s,11 is 0010xxxx. The out-link of the same network is constructed as l_d,1 = (0 0 0 0 f_d,15), where the rewrite parameter of f_d,15 is xxxx0000 and the domain of f_d,15 is 00xxxxxx. The elements of L_s construct an abstract rule that delivers the given packets to the selected rules. In L_d, the elements are initialized with the modifying action of the selected r_i, which represents the connection between r_i and l_j. Therefore, the desired reachability check is L_{s→d} = L_d R_c L_s. For the example network in Figure 2 with l_s,1 and l_d,1, the result is a single element in which the rewrite parameter of a_3 is xxxx0000 and the rewrite parameter of a_4 is xx110000.
The rule-updating problem is defined as computing L*_d R*_c L*_s from R*, where R* is the updated adjacency matrix. It is represented in block form as R* = [[R + R_Δ, x_1], [x_2, 0]], where x_1 and x_2 are n-dimensional vectors. When a new rule is added to the network, the covered parts of the new rule are found by using the MTBDD-based method. The R_Δ represents the changed parts of the existing rules after the new rule is added. Figure 3 shows an example of adding a new rule r_6.
In this example, the rewrite parameter of f_Δ,53 is xx11xxxx, and the domain of f_Δ,53 is 000100xx. The x_1 is (0 0 0 0 f_56)^T, and the x_2 is (f_61 0 0 0 0), where the rewrite parameter of f_56 is 00xxxxxx and the rewrite parameter of f_61 is xxxxxxxx. The domain of f_56 is xx0100xx, and the domain of f_61 is also 000100xx.
Similarly, we get the updated L*_s = (L_s l_s)^T and L*_d = (L_d l_d). If the newly added rule does not directly connect to L_s and L_d, then l_s and l_d are zero. When a new rule is added to the network, we compute the changed part of the adjacency matrix, written as R_Δ,1. The product of two updated adjacency matrices is (R*)^2 = R^2 + R R_Δ,1 + R_Δ,1 R + R_Δ,1^2. Thus, the exponentiation of the adjacency matrix is computed from the cached powers of R and the delta terms. The updated R*_c is obtained by R*_c = E + Σ_{n=1}^{N} (R*)^n and can be decomposed as R*_c = R_c + R_Δ,c. The final check is computed by L*_d R*_c L*_s, and the result is L*_{s→d} = L_{s→d} + ΔL_{s→d} = L_{s→d} + L*_d R_Δ,c L*_s (13). Therefore, Algorithm 3 shows the process of updating new rules. In Fig. 3, the R_Δ,1 of the newly added r_6 is constructed from R_Δ, x_1, and x_2; the difference of the final checking result is l*_d,1 R_Δ,c l*_s,1.

Algorithm 3 Update New Rules
Require: The newly added rule r, the maintained connection matrix R_c, and the maintained checking result L_{s→d}.
1: Compute R_Δ, x_1^T, and x_2 by using the MTBDD-based method.
2: R_{Δ,1} ← using (9)
3: R* ← using (9)
4: for i = 2 to N do
5:   R_{Δ,i} ← R_{Δ,i−1}·R + R*^{i−1}·R_{Δ,1}
6: end for
7: R*_c ← using (12)
8: L*_{s→d} ← using (13)
9: Detect the desired invariants.
10: return R*_c

In the example, the rewrite parameter of the resulting element a_5 is 00xx0000. Deleting a rule is almost the same as adding one, except that the increment is applied with the minus operation. Link Up and Link Down: Similarly, when a new link comes up, the MTBDD of this link is established to compute the increment of the adjacency matrix, represented as R_Δ in (9). It can also be written as R_{Δ,1}, since the new link does not add a new rule. The updated network is computed by Algorithm 3. When a link goes down, the process is the same as that for adding a new link. The MTBDD of this link is frozen for a fixed time to handle fast recovery.
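The incremental recurrence behind Algorithm 3 can be sketched on a boolean (0/1) abstraction, where transform functions are collapsed to reachability bits: R* = R + R_{Δ,1}, R*ⁿ = Rⁿ + R_{Δ,n}, and R_{Δ,i} = R_{Δ,i−1}·R + R*^{i−1}·R_{Δ,1}. This is only a sketch of the update idea; NetV's matrices hold transform functions, not booleans.

```python
# Boolean abstraction of the incremental closure update of Algorithm 3.

def bmul(A, B):
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def badd(A, B):
    return [[a or b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def identity(n):
    return [[i == j for j in range(n)] for i in range(n)]

def closure(R, N):
    """E + R + ... + R^N, the boolean analogue of R_c."""
    acc, power = identity(len(R)), identity(len(R))
    for _ in range(N):
        power = bmul(power, R)
        acc = badd(acc, power)
    return acc

def incremental_closure(R_c, R, Rd1, N):
    """Fold the change RΔ,1 into a maintained closure R_c of R."""
    Rstar = badd(R, Rd1)
    acc, Rd, Rstar_pow = badd(R_c, Rd1), Rd1, identity(len(R))
    for i in range(2, N + 1):
        Rstar_pow = bmul(Rstar_pow, Rstar)            # R*^{i-1}
        Rd = badd(bmul(Rd, R), bmul(Rstar_pow, Rd1))  # RΔ,i
        acc = badd(acc, Rd)
    return acc

# new rule adds the edge 1 -> 2 on top of the existing edge 0 -> 1
F, T = False, True
R   = [[F, T, F], [F, F, F], [F, F, F]]
Rd1 = [[F, F, F], [F, F, T], [F, F, F]]
updated = incremental_closure(closure(R, 3), R, Rd1, 3)
```

The incrementally maintained closure agrees with recomputing the closure of the updated matrix from scratch, which is what makes the per-update cost depend only on the change.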

V. CHECKING INVARIANTS
The invariants are checked during updating. In this section, we mainly describe the intra-domain problems and the inter-domain forwarding loop.

A. INTRA-DOMAIN INVARIANTS 1) REACHABILITY AND REACHABILITY VIA A WAY-POINT
Reachability is the fundamental property in network verification. The connection matrix reflects the reachability from one rule to another. We check reachability constraints by constructing the proper L_s and L_d. The product L_d R_c L_s represents the reachability between the given positions under the desired constraints.
If we want to know the reachability from l_12 to l_d with the headers limited to 00xxxxxx, the corresponding l_s and l_d are constructed accordingly. The result of l_{d,1} R_c l_{s,2} is an element whose rewrite parameters are described in Appendix A. This function represents the reachability from l_12 to l_d. If a packet with header 00100010 is sent to l_12, then l_d will receive three packets with the header 00100000. Reachability via a way-point is split into two successive reachability checks. Suppose we wish to ensure that all traffic from l_s to l_d goes through S_4, where the rule r_4 is located. We compute the reachability f_{s→r4} from l_s to r_4 and the reachability f_{r4→ld} from r_4 to l_d. Then the composition f_{r4→ld} ∘ f_{s→r4} is compared with the reachability f_{s→d} from l_s to l_d. If they are equal, all the traffic from l_s to l_d does pass through S_4.
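The way-point split can be sketched on concrete header sets (toy data, not the Stanford rules): a segment's reachability is a map from input header to the set of delivered headers, and traffic passes S4 iff the composition through r4 equals the direct reachability.

```python
# Way-point check on a toy header model: header -> set of outputs.

def compose(f, g):
    """(f ∘ g)(h): feed every packet g emits for h into f."""
    return {h: set().union(*(f.get(p, set()) for p in pkts))
            for h, pkts in g.items()}

f_s_to_r4 = {"0010": {"0000"}}    # l_s -> r4 segment (hypothetical)
f_r4_to_ld = {"0000": {"0000"}}   # r4 -> l_d segment (hypothetical)
f_s_to_d = {"0010": {"0000"}}     # direct l_s -> l_d reachability

via_r4 = compose(f_r4_to_ld, f_s_to_r4)
passes_waypoint = via_r4 == f_s_to_d   # all traffic crosses S4

# a flow bypassing r4 would make the two functions differ
f_s_to_d_bypass = {"0010": {"0000"}, "0011": {"0011"}}
```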
If the path of the packets is needed, every rule associated with the in-port of a switch is attached with a push action that pushes the switch index onto the path. The paths can also be computed as an additional composition. As an example, l_{d,1} R_c l_{s,2} becomes f_{s→d,12}, where the rewrite parameters of a_{s,2} and a_{s,3} are the same as the parameter of a_3. However, the path attached to the parameter of a_{s,2} is (S_1, S_2, S_1, S_4, S_5), and the path attached to the parameter of a_{s,3} is (S_1, S_4, S_5). If a packet with header 00100010 is sent to l_12, it is delivered to l_d through three different paths and finally has the same header. In this way, reachability via a way-point can be checked by the paths. However, processing with paths decreases the performance of composing two elements.

2) LOOPS AND THE LIMIT OF MAX HOPS
A loop occurs when a packet returns to the same rule twice. Loops in the network can be checked during the update processing. The main diagonal of Rⁿ represents the reachability from a rule to itself in n hops. Therefore, if an element of the main diagonal is not equal to ''0'', some packets that go through this rule are transmitted back to it again, which is called a loop. Definition 1 (Generic Loop): Suppose R is the adjacency matrix of the rules of a given network, and r_{n,ii} is the i-th element of the main diagonal of Rⁿ. An n-hop generic loop of r_i exists if ∃h ∈ X_{n,ii} such that r_{n,ii}(h) ≠ ∅, where X_{n,ii} is the domain of r_{n,ii}.
If the packets that a rule receives twice go through the same path again, the loop is infinite, as defined in Definition 2.

Definition 2 (Infinite Loop): Suppose r_{n,ii} is in the simplest form. An n-hop infinite loop of r_i exists if ∃h ∈ X_{n,ii,k}, k ∈ {1, 2, …, M}, such that a_{n,ii,k}(h) ∈ X_{n,ii,k}. Therefore, after we obtain R_c, loops within N hops can be detected by Theorem 3.
Theorem 3: Let R_c be the connection matrix and r_{c,ii} a main-diagonal element of R_c. Then r_{c,ii} − 1 ≠ 0 implies that there exists at least one Generic Loop from rule r_i to itself. In addition, for the simplest form of r_{c,ii}, ∃h ∈ X_{c,ii,k}, k ∈ {1, 2, …, M}, such that a_{c,ii,k}(h) ∈ X_{c,ii,k} implies that there exists at least one Infinite Loop.
The details and the proof are presented in Appendix C. In (8), r_{c,11} − 1 is g_11 and g_11 ≠ 0. Thus, there is a loop from r_1 to r_1. However, the range of g_11 is 00xxxx00, and 00xxxx00 ∩ 00xxxx10 = ∅, so this loop is not an infinite loop. During the processing of adding a rule to the network, we obtain R_{Δ,c}. The loop can then be checked through the main-diagonal elements of R_{Δ,c}. A loop over N hops implies a max-hops failure. The max-hops failure is identified by checking whether R^{N+1} has a non-zero element. During Algorithm 3, we obtain R_{Δ,N}. Supposing the existing rules have already been checked, R^{N+1} = 0. By using (11), R_{Δ,N+1} can be computed and used to check the max-hops failure.

3) BLACK HOLES
A black hole means that a set of packets is dropped because no entry matches. The basic leakage of a link can be detected during preprocessing. If the union of the domains of the rules on the same in-port does not cover the all-wildcard expression, there is a leakage on this in-port. We construct a fake rule that represents the ''leakages'' of every link. A vector l_{d,3}, which represents the abstract ''leakage port,'' is built to extract the reachability from the rules or the links to the ''leakages.'' The reachability from the in-port of the network to the ''leakage port'' is called the black hole.
With the ''leakage'' rule, l_{s,1} becomes (f_{s,11}, 0, …, f_{s,17})^T. The element of l_{d,3} R*_c l_{s,1} indicates that the traffic from l_s within xx10xxxx meets the black hole in this network. When a rule is added to the network, the black hole is detected by computing l*_{d,3} R_{Δ,c} L*_s. If the specific black-hole position is desired, more ''leakage'' rules should be constructed for each switch or link.
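The coverage test behind the ''leakage'' rule can be sketched by enumeration over a tiny 4-bit header space: the black-hole domain of an in-port is every header matched by none of its rule domains. (NetV performs this on BDDs; the enumeration is a toy.)

```python
# ''Leakage'' sketch: headers that fall through all rule domains.

from itertools import product

def matches(h, pat):
    """A header matches a wildcard pattern bit by bit."""
    return all(p == 'x' or p == b for b, p in zip(h, pat))

def leakage(domains, width=4):
    """Headers matched by no rule domain on one in-port."""
    return {''.join(bits) for bits in product('01', repeat=width)
            if not any(matches(''.join(bits), d) for d in domains)}

full = leakage(["00xx", "01xx", "1xxx"])   # domains cover everything
hole = leakage(["00xx", "1xxx"])           # 01xx headers are dropped
```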

B. INTER-DOMAIN FORWARDING LOOP
Loops can arise in both intra-domain and inter-domain routing. In Figure 5, there are two connected domains. The reachability from l_1 to l_3 and from l_4 to l_2 in domain B, and the reachability from l_3 to l_1 and from l_2 to l_4 in domain A, are computed using the described intra-domain method. They are written as f_{B,31}, f_{B,24}, f_{A,13}, and f_{A,42}. The links are treated as vertices, and the reachability between these links is treated as edges. We establish the adjacency matrix of these links and square it. If the main-diagonal elements of the square are not 0, there is a loop, e.g., from l_1 back to l_1. We extend this situation to multiple domains. As shown in Figure 6, there are more than two connected domains. A super controller holding the topology of the domains is set up to maintain the entire network. Every domain reports the reachability functions from its in-domain links to its out-domain links to the super controller; the intra-domain information is visible only to the intra-domain controller. We establish the adjacency matrix L of the inter-domain links in the super controller, where L is computed using the described intra-domain method. The element l_{1→1} is checked by Theorem 3 to identify a loop. However, it can take a long time to get the result.
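The two-domain case of Figure 5 can be sketched by collapsing the link-to-link reachability functions to boolean edges: links are vertices, and a nonzero diagonal of L² exposes the inter-domain loop. (Toy booleans stand in for the transform functions.)

```python
# Inter-domain loop sketch for the two-domain example.

def bmul(A, B):
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

links = ["l1", "l2", "l3", "l4"]
# f_B,31: l1 -> l3, f_A,13: l3 -> l1, f_B,24: l4 -> l2, f_A,42: l2 -> l4
edges = {("l1", "l3"), ("l3", "l1"), ("l4", "l2"), ("l2", "l4")}
L = [[(a, b) in edges for b in links] for a in links]
L2 = bmul(L, L)
looping = [links[i] for i in range(4) if L2[i][i]]
```

Every link sits on a two-hop inter-domain cycle here, so every diagonal entry of L² is nonzero.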

VI. IMPLEMENTATION AND EVALUATION
In this experiment, we test all the functionalities and compare our model with NetPlumber [2] and APT [3] in the same environment. The performance of inter-domain loop detection is compared with the HSA-based method in [13].

A. IMPLEMENTATION
NetV sits at the bottom of the controller. It extracts the rules that are generated by the applications and ready for installation, then merges those rules into a single policy. The defined invariants are checked during the update process. If a newly added rule violates an invariant, an error is reported. The prototype of NetV is written in the C programming language. We use BDDs to represent the domain and the range of a rewrite function. The BDD environment uses the BuDDy library [21]. The preprocessing, which includes obtaining the initial connections of rules, removing conflicts, and merging rules with the same action, is done with the MTBDD method in [20]. We rebuilt parts of the BuDDy library to adapt it to the MTBDD structure and the described algorithms on BDD headers. We use the compressed sparse row/column (CSR/CSC) data structure to store the matrices for computational performance, since fewer than 2% of the matrix elements are valid.
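A minimal sketch of the CSR encoding used for the matrices: only the nonzero elements, their column indices, and per-row offsets are stored. (In NetV the stored values are BDD-backed transform functions; plain integers stand in for them here.)

```python
# Minimal CSR encoding of a sparse matrix.

def to_csr(dense):
    """Return the (data, indices, indptr) arrays of a dense matrix."""
    data, indices, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v:                       # keep nonzero elements only
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr

def csr_row(csr, i):
    """Recover row i as a {column: value} map."""
    data, indices, indptr = csr
    return {indices[k]: data[k] for k in range(indptr[i], indptr[i + 1])}

csr = to_csr([[0, 5, 0], [0, 0, 0], [7, 0, 9]])
```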
We evaluate the performance and functionality of our prototype on the Internet 2 network dataset and Stanford's backbone network dataset [1], both from real networks. The Stanford dataset has 14 operational-zone Cisco routers and 2 backbone Cisco routers, connected via 10 Ethernet switches. There are more than 757,000 forwarding rules, 100+ VLANs, and 1,500 ACL rules. The Internet 2 network has 9 routers with more than 120,000 IPv4 forwarding rules and 300+ MPLS rules. The basic OpenFlow rule sets are generated from these two rule sets by the parsing part of NetPlumber [2]. The number of OpenFlow rules is shown in Table 1.

B. PREPROCESSING OF UPDATING
First, we respectively use Algorithm 1 and the existential-quantification operation implemented with restrictions in APT [3] for the whole process and record the processing time of each call. The results are shown in Table 2 and Table 3. On average, using Algorithm 1 to set the wildcards on BDDs is 8-10 times faster than using the restriction operation on the two network rule sets, since the ''or'' operation on the lower-level part of a BDD is faster than the ''or'' operation P|_{x=true} ∨ P|_{x=false} over entire BDDs. Then we record the per-call processing time of the rewrite function. Table 4 shows that the average processing time of the rewrite function is 2-3 times that of the set-wildcard function. The rewrite function contains a set-wildcard function and a BDD ''and'' operation. Thus, the set-wildcard function is almost the same order of magnitude as the BDD ''and'' operation.
Then we evaluate the performance of the preprocessing of the dynamic update, which obtains the difference of the adjacency matrix, by separately removing and adding back each rule in the rule sets. The update process of NetPlumber can be divided into a preprocessing part, which creates pipes and deals with the cover relations when adding a rule, and a propagating part for the previous-hop rules. We therefore use the preprocessing of NetPlumber as the baseline. The time cost of NetPlumber's preprocessing mainly depends on the size of the flow table and the number of previous-hop and next-hop flow tables. Thus, in Fig. 7 and Fig. 8, the per-update preprocessing run time of NetPlumber on the Stanford network rules is even shorter than the run time on the Stanford network forwarding rules. The per-update preprocessing run time of NetV depends on the number of merged rules in the previous-hop and next-hop tables and on the efficiency of the BDD operations. Thus, over the rule sets, the average per-update preprocessing run time of NetV on the Stanford network forwarding rules is the shortest. As shown in Figs. 7, 8, and 9, the per-update preprocessing time of NetV is no more than that of NetPlumber. Therefore, the preprocessing of NetV adds only a moderate cost to the entire process, while it computes the precise connection changes of the merged rules when updating rules.

C. EVALUATION WITH NETPLUMBER
First, we generate the merged rules from the OpenFlow rule sets to evaluate the redundancy of the rules with wildcard expressions. As shown in Table 1, we use the data sets parsed by NetPlumber [2]. The first one is the forwarding rule set extracted from the Stanford backbone network. The rules in the same table are merged based on their actions, i.e., the plus operation of the rule functions in Section III. For the forwarding rule set, the number of merged rules is almost 6% of the entire OpenFlow rules; it is even 0.5% for the Internet 2 rule set.
NetV took 2.5 s to generate the initial adjacency matrix R for Stanford's backbone network and 8.2 s to get the result of R_c. We ran NetPlumber in the same environment. It took 0.8 s to create the initial plumbing graph and 131 s to generate checking results with 16 source nodes connected to each router. For NetV with the same 16 ports, it took 702 ms to find all results by iteratively multiplying the adjacency matrix and 108 ms by directly using R_c. NetV took 23.7 s to generate the initial adjacency matrix for the Internet 2 network, and R_c was computed in 1.1 s. NetPlumber took 88 s to create the initial plumbing graph and 653 s to generate checking results with 12 source nodes connected to the routers. NetV took 184 ms by iteratively multiplying R and 27 ms by multiplying R_c. The results show that the redundancy of the rules does increase the time cost of reachability verification. NetV performs better even though it computes the reachability R_c of the entire rule set.
We connect 16 source nodes to each router for Stanford's backbone network and 12 for Internet 2 for NetPlumber. These in-ports are also used to generate the L_s for computing the final result L_{s→r}. We test the per-update run time by separately removing and adding back each rule in the rule sets. In Figure 10, the rule set is the forwarding rules of Stanford's backbone network. The solid lines represent the main process without preprocessing; for NetPlumber this is the propagation part. The dashed lines represent the entire rule-update process. From the figure, NetV performs better than NetPlumber. The solid blue line starts at nearly 40% on the ordinate, meaning that nearly 40% of the rule updates in this experiment do not affect reachability, which is also described by AP Verifier [18]. A part of the solid red line is higher than the blue one, since some rules affect only a small number of flows or are located at the end of the flows from the source nodes. NetV obtains the reachability between every two rules; thus it gets more information, including potential loops. From the results in Table 5, NetV is 70 times faster than NetPlumber on average; for the main process, it is 48 times faster on average.
The following two experiments on Stanford's backbone network and the Internet 2 network also indicate that the redundancy of rules reduces the update performance. The time cost of preprocessing is negligible compared with the main process in these two experiments. In Figure 11, the length of the header is raised from 32 bits in the forwarding rules to 128 bits. From the results in Table 6, NetV is 13 times faster than NetPlumber on average. For the Internet 2 network, since the redundancy is vast, the per-update run time of NetV is clearly faster than NetPlumber's in Figure 12. Table 7 shows that it is 75 times faster than NetPlumber on average.
To test link updates, we remove and add back each connected link of Stanford's backbone network and the Internet 2 network. As shown in Table 8 and Table 9, the per-link-update run of NetV is hundreds of times faster than NetPlumber. For NetPlumber, the number of rules associated with each link is always more than a hundred in Stanford's backbone network, and it is even larger on the Internet 2 network. Adding a new link amounts to adding the pipes between all the rules associated with the link and then propagating all the flows in front of these pipes; therefore, it takes a long time to process a link update. For NetV, the number of merged rules associated with each link is quite small. Thus, the link update of NetV is faster by incrementally updating R_c, taking the connections of the merged rules as the difference R_Δ of the adjacency matrix. Loop detection is done during the update process by checking the elements of the main diagonal. The previous experiments also check the reachability from the 16 in-ports of Stanford's backbone network and the 12 in-ports of the Internet 2 network to the out-ports. Since computing l_d R_{Δ,c} l_s is always cheap and can be ignored compared with updating R_c, the time cost of the incremental verification depends on the speed of updating R_c.
Above all, merging the rules with the same behavior in the same table does reduce redundancy and speed up the rule-update process. The composition of transform functions in header space allows the reachability information to be retained for reasoning about the reachability of a specific packet or set of packets. For example, if the reachability from in-port 1 to out-port 2 has already been calculated and we want to know the reachability of a specified set of packets from in-port 1 to out-port 2, NetPlumber would send the symbolic headers to the source node to be propagated again; NetV instead uses the transform function from in-port 1 to out-port 2 to perform a one-time computation on the symbolic headers. The matrix operation is an efficient way to locate and operate on the desired functions and to separate the location space from the header space in networks. However, since the composition only focuses on the substitution in header space, the paths are not recorded. If a path record is required, the transform function is defined over two exclusive spaces by adding actions that push the current path associated with the function onto the path record. The composition of two functions then includes joining two path records, which adds extra processing time and sometimes increases the number of functions if there are multiple paths between two points.

D. EVALUATION WITH APT
The key idea of APT is to precompute the atomic predicates (APs) and give each of them an integer identifier. The propagation time of the symbolic header is strongly reduced by using these identifiers instead. Consequently, the computation of the APs accounts for a significant portion of the entire verification time, especially for incremental verification. We could not obtain the source code of APT. AP Verifier [18] uses several optimization techniques to reduce verification time. Since the computation of the APs limits the performance, we wrote a C program that performs AP generation based on the algorithms in their papers [3], [18], and we optimized this program to improve its performance. APT converts the forwarding tables into predicates of out-ports. In SDN, a rule provides the flexibility to perform an action for a specific in-port; thus, the forwarding rules cannot be strictly translated into predicates of the out-ports. Since the merged rules also remove the redundancy and represent the same behaviors, we use the domains of the merged rule functions to calculate the APs. Table 10 shows the number of basic APs generated by the filters of the merged rules without iteratively computing the transformed predicates. If the basic data set is in the OpenFlow format, the number of APs is far larger than in the traditional model of their work. If we iteratively compute the transformed predicates and use them to compute the extended APs, the total numbers of APs are 150,000+ for the Stanford network and 38,900+ for the Internet 2 network. The update performance strongly depends on the number of existing APs. For each update of a predicate P, the algorithm first computes the APs {P, ¬P} and then computes the updated APs by applying the ''AND'' operation to every pair between {P, ¬P} and the maintained APs.
Therefore, the time complexity of a one-time update is O(N·K), where N is the number of existing APs and K is the number of transformed predicates produced by the iterative algorithm. Figure 13 presents the average run time for different numbers of existing APs for each update of a predicate P. The update run time rises with the number of existing APs. The average per-update time of a predicate is 504 ms after the number rises over 20,000 for the Internet 2 network, and 1.3 s after it goes over 50,000 for the Stanford network. If we take the iterative algorithm for the transformed predicates into consideration, in the last 50% of the updates of the merged rules, half of the updates take more than 2 s for the Internet 2 network, since the number of iterations is more than two and the number of transformed predicates is more than five. Some updates produce 700+ transformed predicates, which leads to more than 200 s to complete an update. For the Stanford network, some updates in the last 50% produce 1,000-3,000 transformed predicates, which leads to more than 800 s to update some merged rules. We then test per-rule updates on the 273 APs generated by the merged rules for the forwarding rules of the Stanford network. As shown in Figure 14, NetV performs better than AP Verifier: the mean per-rule update time is 3 ms for APT versus 0.124 ms for NetV. Thus, APT is not suitable for the OpenFlow-format rule set of this experiment.
Furthermore, APT only designs an algorithm to update the set of atomic predicates and does not keep the number of atomic predicates minimal during incremental updates, which causes instability in the number of atomic predicates when updating rules. For example, assume the two existing predicates are 1x1x and xxx1, so the APs are {1x11, 1x10, 0xx1 + 1x01, 0xx0 + 1x00}. If adding a rule changes the predicate 1x1x to 1xxx, the updated APs are {1x11, 1x10, 1x01, 1x00, 0xx1, 0xx0}, which contains 6 APs. If we had used 1xxx as one of the two predicates from the start, the computed APs would be {1xx1, 1xx0, 0xx1, 0xx0}, which contains only 4 APs. We also ran a test computing the APs for the forwarding rules of the Stanford network by updating each rule instead of the merged rules. The number of final APs is 1,515, much larger than the 273 obtained with the merged rules.
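The worked example can be sketched by a from-scratch AP computation over a 4-bit toy space: headers with the same membership signature across all predicates form one atomic predicate. (APT operates on BDDs; enumeration is a toy stand-in.) Recomputing from scratch yields the minimal 4 APs for the updated predicate set, whereas the incremental split in the example above produced 6, illustrating the instability.

```python
# Minimal-AP computation by signature grouping over a 4-bit space.

from itertools import product

def matches(h, pat):
    return all(p == 'x' or p == b for b, p in zip(h, pat))

def atomic_predicates(preds, width=4):
    """Group headers by their membership signature across predicates;
    each group is one atomic predicate."""
    groups = {}
    for bits in product('01', repeat=width):
        h = ''.join(bits)
        sig = tuple(matches(h, p) for p in preds)
        groups.setdefault(sig, set()).add(h)
    return groups

before = atomic_predicates(["1x1x", "xxx1"])   # 4 atomic predicates
after = atomic_predicates(["1xxx", "xxx1"])    # still 4 when recomputed
```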
APT performs very well in traditional networks. However, it falls short for SDN rules in incremental verification unless the number of generated APs is much smaller than the number of rules. Overall, these results indicate that NetV performs better than APT for incremental verification in SDN.

E. EVALUATION OF TWO DOMAINS
In [13], HSA is used to check the inter-domain loop. For domains A and B, if the set of headers on the connected link is H, the check verifies F_A(F_B(H)) = ∅ by using HSA, where F_A and F_B are the transfer functions of the domains. Incremental verification can therefore be done by using NetPlumber to verify F_A(F_B(ΔH)) = ∅, where ΔH represents the changed part on the connected links caused by an update. We split the Stanford network into two parts, each with eight routers, connected by two links. We select the filters of the rules in the switch connected to the link between the two domains as ΔH. Then, we compute F_A(F_B(ΔH)) for each filter by using NetPlumber. For NetV, the change L_Δ is generated from each filter and then used to update the connectivity matrix of the two domains. Table 11 shows the experiment on the forwarding rules. NetV took 76 µs on average to perform the verification for each change with the forwarding rules, while NetPlumber took 1.56 ms. If we use only the forwarding rules, the transform function between every two inter-domain links has a single expression with one domain and one range under NetV, since forwarding rules do not change the headers. If f_A changes by f_{Δ,A} for an update and the maximum number of domain hops is two, the final connectivity matrix and its increment can be written accordingly. Since f_A f_B = f_B f_A for forwarding rules, only one composition, i.e., one operation on BDDs, is needed to get the result. To compute F_A(F_B(ΔH)), NetPlumber needs to trace the symbolic headers ΔH through both domains. Therefore, NetV performs better than NetPlumber for inter-domain verification. Table 12 and Table 13 also show the better performance of NetV. The maintained reachability of a single domain is L_{s→d}.
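The commuting-composition argument can be sketched with wildcard strings standing in for the BDDs: for pure forwarding rules the transform is the identity on its domain, so composing f_A and f_B reduces to intersecting their domains, and the composition commutes. The domains below are toy values.

```python
# For forwarding-only rules, composition = domain intersection.

def intersect(a, b):
    """Bitwise intersection of two wildcard strings; None if empty."""
    out = []
    for x, y in zip(a, b):
        if x == 'x':
            out.append(y)
        elif y == 'x' or x == y:
            out.append(x)
        else:
            return None
    return ''.join(out)

dom_A, dom_B = "00xxxxxx", "0010xxxx"   # toy inter-domain link domains
ab = intersect(dom_A, dom_B)            # f_A ∘ f_B
ba = intersect(dom_B, dom_A)            # f_B ∘ f_A
```

Because the two orders give the same domain, a single BDD operation suffices, whereas tracing symbolic headers must walk both domains.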
Each element of L_{s→d} is a transform function in the simplest form, which eliminates the detailed information of the paths inside the domain and reduces the number of computations. The inter-domain reachability is obtained by composing these transform functions rather than by retracing the symbolic headers. Overall, these results and the analysis indicate that the dynamic performance and the scalability of NetV are better than those of NetPlumber for verification with multiple layers of controllers (multiple domains) in SDN.

VII. CONCLUSION
The main goal of this paper is to design a scheme for dynamic data-plane verification in SDN. We define functions in header space with BDD domains and operations that allow the substitution behaviors of the rules to be merged or connected as expressions of transform functions. We also propose an update method based on graph and matrix operations to maintain the reachability information of the entire network. Common network invariants are translated into requirements on the matrix model and can be incrementally verified when updating rules. Finally, we extend the model to multi-domain networks for incrementally verifying inter-domain invariants. The experiments confirm that the prototype NetV outperforms NetPlumber on both static and incremental verification. NetV also outperforms APT on rule sets in the OpenFlow format, which generate a large number of APs. Finally, it is more flexible than previous tools in scaling to large networks and multi-domain networks.
The results of this work indicate that treating the header-change behavior as functions that can be calculated with the defined operations is reasonable. This research provides a framework for exploring rule relations in SDN. A limitation of this study is that some network behaviors (such as the TTL decrement) have not been incorporated into the function operations. Another limitation is that we only test the computation part for the multi-domain network, without the cost of communication between controllers. What is now needed is a protocol and a compressed structure for transporting BDDs between controllers. Further research might explore the features of the network reachability stored in the matrix by using matrix decomposition operations.

APPENDIX A
Table 14 shows the meaning of the symbols used in this paper. Table 15 shows the rewrite parameters of the transform functions used in this paper. For example, the rewrite parameter of a_1 is xxxxxx00; thus the rewrite mask is 11111100 and the rewrite value is 00000000. If the symbolic header is 110xxxx1, the rewritten symbolic header is 110xxx00.

APPENDIX B ENCAPSULATION AND DE-ENCAPSULATION
We define a new function in header space, written a_r, that replaces a given field with another field of the packet. The parameters of a_r are the set of relations for each bit, from which two masks, mask_f and mask_t, can be acquired: mask_f indicates the position of the value, and mask_t indicates the replaced position. The encapsulation operation is defined as a_e = a_rw a_r: it first moves the values to the blank bits and then fills the original bits with the newly given values. The de-encapsulation operation only moves the values back to the original bits; thus it is represented by a_r alone. For example, to perform encapsulation (as in IPv4; for MPLS it is more like the rewrite action) of the first four bits of the symbolic header 0010xx00 with 1111, the header space is extended to 12 bits, and the header becomes 0010xx00xxxx. The replace first moves 0010 to the extended bits, giving xxxxxx000010. Then the rewrite function with parameter 1111xxxxxxxx acts on the header, giving 1111xx000010. For de-encapsulation, we only need to recover the stored bits and restore the header to 0010xx00xxxx by using the replace function.
For the composition of transform functions, we define a transform function a_t that always performs the replace first and has three parameters (rl, m_rw, v_rw): the replace relations, which replace the desired bits with the values of the given bits, and the mask and value of the rewrite. Thus, computing a_{t,c} = a_{t,1} a_{t,2} means computing (rl_c, m_c, v_c) from (rl_1, m_1, v_1) and (rl_2, m_2, v_2). For each bit of the vector h, rl(h_i) = h_j is written as j → i, subject to the condition that (¬mask_f)|(¬mask_t) = 0, which means that a replacing bit is not itself replaced by another bit in a replace function. The result is (rl_1 ∘ rl_2, m_2 & rl_2(m_1), (m_2 & rl_2(v_1)) | v_2), where ∘ represents the composition of two replace functions, ''&'' is the bitwise AND, and ''|'' is the bitwise OR.
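The rewrite part of this composition can be sketched directly on integers (the replace relation rl is dropped for brevity, so this is only the rewrite-only special case): a rewrite is h' = (h & m) | v with m & v = 0, and applying f_1 then f_2 composes to (m_1 & m_2, (v_1 & m_2) | v_2). The masks below are the example parameters xxxxxx00 and xx11xxxx.

```python
# Rewrite-only special case of the transform-function composition.

def rewrite(h, m, v):
    """Bits cleared in mask m are overwritten by value v."""
    return (h & m) | v

def compose_rw(f1, f2):
    """Apply f1 first, then f2: (m1 & m2, (v1 & m2) | v2)."""
    (m1, v1), (m2, v2) = f1, f2
    return (m1 & m2, (v1 & m2) | v2)

f1 = (0b11111100, 0b00000000)   # parameter xxxxxx00
f2 = (0b11001111, 0b00110000)   # parameter xx11xxxx
fc = compose_rw(f1, f2)
h = 0b10101010
```

Composing a rewrite with itself returns the same function, which is exactly the idempotence a_m² = a_m used in Appendix C for the infinite-loop argument.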

APPENDIX C THE LOOPS
In Section V-A2, we explained the Generic Loop, defined in Definition 1. In Definition 2, the simplest form means i = j ⇔ a_{n,ii,i} = a_{n,ii,j} for i, j ∈ [1, M], i.e., no two identical transform-function expressions appear in r_{n,ii}. The Infinite Loops are a subset of the Generic Loops in a network. The difference between a Generic Loop and an Infinite Loop is whether the packet is transformed by the same transform behavior again. Since m & v = 0 for a rewrite function, a_m² = (m & m, (m & v) | v) = (m, v) = a_m. Thus, if there exists an a_{n,ii,k} satisfying Definition 2, then a_{n,ii,k}ⁿ(h) = a_{n,ii,k}(h) ∈ X_{n,ii,k}, and the packet h is infinitely transformed by this transform function.
If we use the transform-function definition of Appendix B for encapsulation and de-encapsulation, Definition 2 is modified to state that an n-hop infinite loop of r_i exists if ∃h ∈ X_{n,ii,k}, k ∈ {1, 2, …, M}, such that a_{n,ii,k}(h) ∈ X_{n,ii,k} and a_{n,ii,k}²(h) ∈ X_{n,ii,k}. From (14), since a replace function satisfies the condition (¬mask_f)|(¬mask_t) = 0, it follows that j → i with j ≠ i implies j → j in a replace function. Therefore, if ∃h ∈ X_{n,ii,k}, k ∈ {1, 2, …, M}, such that a_{n,ii,k}(h) ∈ X_{n,ii,k} and a_{n,ii,k}²(h) ∈ X_{n,ii,k}, the packet h is infinitely transformed by this transform function.
Proof (Theorem 3): By (7), r_{c,ii} = 1 + Σ_{n=1}^{N} r_{n,ii}. In the R_c of a network, every element is positive; negatives (the minus operation) appear only momentarily while removing part of the existing rules. Then r_{c,ii} − 1 ≠ 0 ⇒ ∃n ∈ [1, N], r_{n,ii} ≠ 0. Therefore, ∃h ∈ X_{n,ii}, r_{n,ii}(h) ≠ ∅, and by Definition 1, there exists at least one Generic Loop.
YANG FANG was born in Yichang, Hubei, China, in 1988. He received the B.S. degree from the School of Electronic and Information Engineering, Tianjin Polytechnic University, China, in 2010. He is currently pursuing the Ph.D. degree with the South China University of Technology, China. His research concerns software-defined networks, network verification, network testing, and formal methods.
YIQIN LU received the B.S., M.S., and Ph.D. degrees from the South China University of Technology in 1990, 1993, and 1996, respectively. From 1994 to 1996, he was a Visiting Student with the City University of Hong Kong. Since 1996, he has been with the South China University of Technology, where he is currently a Professor and the Director of the Network and Data Center. His research interests include various applied topics such as software-defined networks, network function virtualization, the Internet of Things, and network security.