A Revised Trajectory Approach for the Worst-Case Delay Analysis of an AFDX Network

AFDX (Avionics Full-Duplex Switched Ethernet), standardized as ARINC 664, is a major upgrade for safety-critical applications in avionics systems. Worst-case delay analysis of all the flows transmitted on an AFDX network is mandatory for certification reasons. Different approaches, such as network calculus and the trajectory approach, have been proposed to compute end-to-end (ETE) delay upper bounds, but these methods still introduce some pessimism in the computations and overestimate the exact worst-case delay of the flows. In addition, the existing trajectory approaches may underestimate the exact worst-case delay of the flows in some corner cases. In this article, we revise the trajectory approach to make the flow analysis more accurate for the computation of the worst-case ETE delay. The results show that the worst-case delay analysis of an AFDX network can be improved by using our revised trajectory approach.


I. INTRODUCTION
Distributed real-time systems have spread widely across many domains, such as industrial automation and telecommunication, and their increasingly widespread use leads to a growing number of embedded functions, involving an exponential increase in exchanged data and connections [1]. In order to cope with this problem, the concept of shared computation and network resources has been developed. Real-time capable networks offering large bandwidth capacity over a shared medium have been proposed in the avionics field, such as the Avionics Full-Duplex Switched Ethernet (AFDX) [2], used as a backbone network in most recent civilian aircraft. Nevertheless, AFDX still suffers from an indeterminism problem that requires worst-case latency analysis, as guaranteed upper bounds on end-to-end (ETE) delays for messages transmitted over an AFDX network are mandatory for certification reasons.
Several approaches have been developed in order to determine guaranteed upper bounds of ETE delays, such as network calculus (NC) and the trajectory approach (TA). Even though upper bounds are sufficient for certification purposes, they often imply overestimation when used for network dimensioning. Existing trajectory approaches overestimate the worst-case end-to-end delays of the messages in some corner cases and underestimate them in others. The contribution of this paper is a revised trajectory approach (RTA) for computing ETE delays in an AFDX network. It reduces some overestimation (also called pessimism) in the computations and solves the underestimation (also called optimism) problem, and hence improves the worst-case end-to-end delay upper bounds of the flows.
This paper is organized as follows. In Section II, we present the existing worst-case ETE computation methods. The detailed context of AFDX is given in Section III. The trajectory approach is presented in Section IV. A revised trajectory approach in the context of the FIFO and FP/FIFO policies is demonstrated in Section V. Finally, experiments and comparisons are conducted in Section VI. The conclusion is presented in Section VII.

II. RELATED WORK AND MOTIVATIONS
Different methods have been proposed in recent years to determine the worst-case end-to-end delay for the messages of AFDX networks. A simulation method and a stochastic network calculus approach have been proposed for computing the distribution of the delay of a given flow, but they do not cope with the worst-case analysis problem considered in this paper. As the current reference certification tool, network calculus [3] has been applied to the AFDX networks in civilian aircraft, but the pessimism of the obtained upper bounds is difficult to quantify. Although a stochastic network calculus approach [4], [5] has been proposed, the results are not yet relevant to real-world systems. The model-checking approach presented in [6] computes an exact worst-case delay of each flow, but it cannot scale up to large AFDX configurations due to combinatorial explosion. Although some work [7] has been devoted to reducing the state space, it still does not cope with real AFDX configurations. The holistic approach [8] has also been used to compute the local delays, but it considers a worst-case scenario (possibly unreachable) in each node visited by a flow. The forward end-to-end delay approach [1] has been proposed, which computes an upper bound on the end-to-end delay for AFDX networks based on the maximum backlog of single-priority traffic.
The classic trajectory approach (CTA) has been defined in [9], [10]. CTA has been optimized in an AFDX context to provide an upper bound on the end-to-end delay encountered by any frame of a flow traversing a data network, and the optimized trajectory approach (OTA) [11], [12] has taken into account the impact of the serialization of flows sharing the same link. The pessimism of OTA has been analyzed in [13]. Recently, several research studies have focused on improving the serialization [14] and optimization [15] of OTA for end-to-end delay analysis. Besides, the trajectory approach has also been applied to deterministic scheduling in network-on-chip. In addition, an extended trajectory approach has been used to analyze the deterministic delay of AVB switched Ethernet networks [16]. However, a counterexample [17] has been given which shows that OTA in its current form can bring an underestimation problem in some corner cases. The source of the underestimation problem has been discussed in [17], [18] and a solution has been proposed in [18].
The limitations of the existing trajectory approaches motivate revising the trajectory approach to improve worst-case ETE delays in packet networks. Compared with the existing trajectory approaches, the most significant contribution of our revised trajectory approach is to sum the maximum net delay introduced by each node on the path of the flow under study. Besides, the revised trajectory approach considers the frame constraints in the serialization effect. This improvement has been shown to produce less pessimistic ETE delays than the existing trajectory approaches and to solve the underestimation problem at the same time.

III. THE AFDX NETWORK CONTEXT
Avionics Full-Duplex Switched Ethernet (AFDX) [2] is a switched Ethernet network taking avionic constraints into account. The ingress and egress points of an AFDX network are called End Systems (ES). Each output port of each ES is connected to exactly one switch port by a full-duplex physical link. The end-to-end traffic is mapped on logical communication channels called virtual links (VL). Fig. 1 depicts a sample AFDX configuration [1]. It is composed of nine end systems interconnected by six switches. A VL is a logical, unidirectional connection from one source ES to one or more destination ES. For example, in Fig. 1, VL τ 8 is a multicast VL starting from ES 5 . The routing of each VL is statically defined. Each VL flow is characterized by the Bandwidth Allocation Gap (BAG) [19], which is the minimum duration between two consecutive frames of the flow τ i , as well as a maximum frame size (L max ) and a minimum frame size (L min ). Each flow τ i following the path P i has a BAG, denoted T i . Each flow τ i has a maximum release jitter J i , an end-to-end deadline D i and a maximum processing time C i on each node h, with h ∈ P i . Moreover, there are neither collisions nor packet losses on the links.
Each node implements traffic shaping and policing units to guarantee the BAG. An AFDX switch operates in store-and-forward mode between input and output ports, applying traffic policing and routing frames to one or several output ports. An AFDX switch has a FIFO buffer at each output port and no input buffer. The technological latency of an AFDX switch is upper-bounded by 16 µs. The servicing rate of an output port is constant (typically 100 Mb/s). There is no global clock or synchronization protocol, so switches and end systems are fully asynchronous. All the above constraints make it possible to obtain a deterministic network based on Ethernet through an ETE delay analysis of the flows.
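The flow parameters above can be captured in a minimal sketch (the class and field names are ours for illustration, not from ARINC 664); it also shows how the per-hop transmission time C i follows from L max and the link rate:

```python
# Minimal sketch of the AFDX flow model described above.
# The class name and fields are illustrative, not part of ARINC 664.
from dataclasses import dataclass

@dataclass
class VirtualLink:
    bag_us: float      # Bandwidth Allocation Gap T_i (microseconds)
    l_max_bytes: int   # maximum frame size L_max
    l_min_bytes: int   # minimum frame size L_min
    jitter_us: float   # maximum release jitter J_i

    def c_max_us(self, link_rate_mbps: float = 100.0) -> float:
        """Worst-case transmission time C_i of one frame on a link."""
        # bits divided by (megabits per second) yields microseconds
        return self.l_max_bytes * 8 / link_rate_mbps

vl = VirtualLink(bag_us=2000, l_max_bytes=1000, l_min_bytes=64, jitter_us=40)
# A 1000-byte frame on a 100 Mb/s link takes 80 microseconds.
print(vl.c_max_us())  # 80.0
```

On the slower 10 Mb/s links used in the case studies below, the same frame would take ten times longer, which is why the servicing rate appears explicitly in the analysis.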

IV. THE TRAJECTORY APPROACH
A. THE COMPUTATION OF THE TRAJECTORY APPROACH
The trajectory approach computes a worst-case end-to-end delay of any flow transmitted on a switched Ethernet network under First In First Out (FIFO) scheduling and non-preemptive Fixed Priority (FP) scheduling. The trajectory approach is based on the busy period concept. A busy period of level L is an interval [t, t') such that t and t' are both idle times of level L and there is no idle time of level L within (t, t'). An idle time t of level L is a time such that all packets with a priority greater than or equal to L generated before t have been processed at time t. The trajectory approach identifies the busy periods and recursively computes the latest start time of the considered flow τ i on its last node. According to the trajectory approach, the worst-case end-to-end delay upper bound R i of flow τ i is computed by the equation

R_i = max_t (W^{last_i}_{i,t} + C^{last_i}_i − t),

where last i is the last visited node of flow τ i and W^{last_i}_{i,t} is a bound on the latest starting time of a packet m generated at time t on its last visited node. The definition of W^{last_i}_{i,t} is given in [12].
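As a toy illustration of this max-over-t structure, the sketch below evaluates R_i for a single FIFO node, where W is reduced to the workload of competing frames that can arrive in [0, t]. The function name and this simplified W are our own; the full multi-node W term of [12] contains many more components (serialization, jitter, switching latencies):

```python
import math

def r_i_single_node(c_i, competitors, horizon):
    """Toy single-node FIFO version of R_i = max_t (W_{i,t} + C_i - t).

    competitors: list of (C_j, T_j, J_j) tuples.  W_{i,t} is simplified
    here to the workload of competing frames arriving in [0, t]; this
    only shows the shape of the computation, not the full W of [12].
    """
    def w(t):
        # frames of flow j that can arrive in [0, t] under jitter J_j
        return sum((1 + math.floor((t + j) / T)) * c for (c, T, j) in competitors)

    # candidate values of t: 0 and the competing arrival instants
    candidates = {0.0}
    for (c, T, j) in competitors:
        k = 1
        while k * T - j < horizon:
            candidates.add(k * T - j)
            k += 1
    return max(w(t) + c_i - t for t in candidates)

# two competing flows, each C = 40 us, BAG = 4000 us, no jitter:
# the maximum is reached at t = 0, giving 40 + 40 + 40 = 120 us
print(r_i_single_node(40, [(40, 4000, 0), (40, 4000, 0)], horizon=8000))
```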
Term (2) corresponds to the processing time of packets from flows crossing the flow τ i , with a fixed priority level equal to that of τ i , transmitted in the same busy period as m. A i,j denotes the maximum difference between the delays of flows τ i and τ j from their source nodes to their first shared output port. A i,j is calculated as follows.
where M h i denotes the earliest arrival time of the first packet that will delay the frame m on the output port of node h. Smax h j and Smin h j are the maximum and minimum times taken by a frame of flow τ j to go from its source node to node h, respectively. Smax h i is defined similarly. J j,switch is the jitter associated with VL j when entering a switch node h.
Term (3) corresponds to the processing time of packets from flows with a fixed priority level higher than that of τ i . B i,j denotes the maximum difference between the delays of flows τ i and τ j from their source nodes to their last shared output port. B i,j is calculated as follows.
Term (4) represents the processing time of the longest packet for each node of path P i except the last one.
Term (5) corresponds to the switching latencies along the path P i . The switching latency at each switch is considered as an upper bounded constant L max .
Term (6) corresponds to the maximum delay caused by the non-preemptive effect of flows with a fixed priority lower than that of τ i . Term (7) refers to the serialization effect, which is the duration between the beginning of the busy period and the arrival of the first packet coming from the preceding node in P i , i.e., h − 1.
Term (8) is subtracted since W last i i,t is the start time of the transmission of frame f i at the output port last i .

B. PESSIMISTIC PROBLEMS IN THE TRAJECTORY APPROACH
The worst-case scenario built by CTA [9], [10] is that the arrival time of every packet joining the trajectory of the packet m under study in node h is postponed in order to maximize the waiting time of m in h. A packet joining m in h cannot arrive before the first packet coming from the same previous node as m, thus Term (7) is null in [9], [10]. The pessimism is generated because the computation of CTA is based on the pessimistic assumption that all the frames of different flows arrive at the same time. Unlike CTA, OTA [11], [12] considers that flows from different virtual links sharing a common physical link cannot arrive at the same time and need to be serialized. Then the beginning time of the busy period on node h is not always M h i . Compared to CTA, OTA gets a tighter end-to-end delay upper bound by considering the serialization effect, but it may still lead to pessimism in some corner cases. The computation of the serialization effect does not consider the BAG constraint between two consecutive frames from the same virtual link imposed by the traffic policy. Moreover, it is very rare that no flow leaves the path of the flow under study in an AFDX network [20]. Some flows join the flow under study, share part of the path and then leave. This implies that the pessimism problem may occur [21]. The pessimism problem still exists in the current trajectory approaches [10], [12], [13].

C. UNDERESTIMATION PROBLEMS IN THE TRAJECTORY APPROACH
Recently, a counterexample given in [17] shows that OTA can compute underestimated delay upper bounds in some corner cases. The underestimation problem is introduced because the subtraction of the serialization factors partially overlaps with the subtraction of the time interval t in (1). A solution [13] has been proposed to solve this problem by taking the subtraction of time t into account in the serialization factors. However, as shown in Fig. 2, an input sequence (seq 0 ) from an input port may contain two frame subsequences (sbq 0 0 and sbq 1 0 ) separated by an idle time. The existing trajectory approaches do not consider this. Actually, M h i is neither the beginning time of the input sequence under study, nor the beginning time of the output sequence. As M h i is an inaccurate input to the existing trajectory approaches, the end-to-end delays computed by them are pessimistic in some corner cases and optimistic in others.

V. REVISED TRAJECTORY APPROACH
A. WORST-CASE SCENARIO ON AN INPUT PORT
Filtering and policing functions are implemented by an AFDX switch to filter all frames arriving at an input port. Here, we assume that the AFDX switches implement a frame-based traffic policing algorithm. The specification of the frame-based traffic policing algorithm [2] is as follows.
1) If the Frame ACcount (denoted AC i ) is greater than the maximum frame size (FS max i ) for τ i , the frame is accepted and AC i is decreased accordingly. 2) Otherwise, the frame is discarded and AC i is not changed.
For a single flow τ i , the worst-case scenario on an input port is depicted in Fig. 3. The minimum duration between the first frame and the second frame is BAG − J i , and the minimum duration between any two other consecutive frames is BAG. Frames of τ i arriving in this time order maximize its delay contribution in a time interval [0, t].
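This worst-case arrival pattern can be sketched directly (the function name and time unit are ours): the first inter-arrival gap is shrunk to BAG − J i by the release jitter, and every later gap is exactly BAG.

```python
def worst_case_arrivals(bag, jitter, t_end):
    """Arrival times of flow tau_i in its worst-case pattern on [0, t_end]:
    first gap BAG - J_i, every later gap exactly BAG (sketch of Fig. 3)."""
    arrivals, t = [0.0], bag - jitter
    while t <= t_end:
        arrivals.append(t)
        t += bag
    return arrivals

# BAG = 2000 us, J_i = 40 us: frames at 0, 1960, 3960 within [0, 4000]
print(worst_case_arrivals(2000.0, 40.0, 4000.0))  # [0.0, 1960.0, 3960.0]
```

Without the jitter term, the same interval would only contain frames at 0 and 2000, i.e., one frame fewer can interfere with the flow under study.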
For aggregated flows, frames from each flow arriving at the same input port in the above time order generate the worst-case scenario. However, to compute an exact serialization effect, we would have to consider all possible transmission orders of the frames of the sporadic flows sharing the same physical link [20], which generates a huge number of scenarios at each node. To reduce the computational complexity, we simplify the computation of the serialization effect. Each flow arriving at the same input port is assigned an offset equal to zero. Frames from the same flow are constrained by their arrival time order. If the transmission time interval of a frame overlaps with that of other frames, then these frames need to be serialized. For example, in Fig. 4, three flows (τ 1 , τ 2 , τ 3 ) arrive at the same input port. As the transmission time interval of the first frame of τ 1 coincides with those of τ 2 and τ 3 , the first frames of τ 1 , τ 2 and τ 3 need to be serialized. Since there are no conflicts between the other frames in the current time order, their time order remains unchanged. Finally, the worst-case scenario of the input aggregation sequence is obtained to compute the worst-case ETE delay.
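A minimal sketch of this simplified serialization (function name ours) queues overlapping frames back to back on the shared link, in FIFO order of their arrival times:

```python
def serialize(frames):
    """Frames whose transmission intervals would overlap on the shared
    input link are queued back to back (sketch of the simplified
    serialization of Fig. 4).

    frames: list of (arrival_time, transmission_time) pairs; offsets of
    the first frame of each flow are all zero, as described above.
    """
    frames = sorted(frames)              # FIFO by arrival time
    out, free_at = [], 0.0
    for arrival, c in frames:
        start = max(arrival, free_at)    # wait while the link is busy
        out.append((start, start + c))
        free_at = start + c
    return out

# first frames of tau_1, tau_2, tau_3 all arrive at t = 0 and get serialized
print(serialize([(0.0, 40.0), (0.0, 40.0), (0.0, 40.0)]))
# [(0.0, 40.0), (40.0, 80.0), (80.0, 120.0)]
```

Frames that do not conflict (e.g., a frame arriving after the link is free again) keep their original timing, matching the statement that non-conflicting frames remain unchanged.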

B. REVISED TRAJECTORY APPROACH
In this section, we describe our revised trajectory approach in detail. Firstly, two theorems are given as follows.
Theorem 1: If the servicing rate of the input link is equal to that of the output link, then the maximum transition cost (mTransCost) from a busy period (of n packets) on the input link to the corresponding busy period on the output link is, for the last frame, the processing time of the longest packet.
Proof: As shown in Fig. 5, a sequence of n frames crosses the input link and the output link of a node. The processing time of the i th frame is denoted x i . The start and end transmission times of x i on the input link are denoted a i and b i , and on the output link c i and d i . By the definition of the busy period, there is no idle time between any two consecutive frames of the input sequence, so b i = b i−1 + x i . According to the store-and-forward strategy, the i th frame cannot start on the output link before it has been fully received on the input link, nor before the previous frame has left the output link, so c i = max(b i , d i−1 ) and d i = c i + x i . To maximize the transition cost, the first frame has to be one of the longest packets. Then the maximum transition cost is obtained as mTransCost = d n − b n = max 1≤j≤n x j .
This proves the theorem. In fact, the transition cost for the last frame is independent of the frame transmission order and is determined only by the processing time of the longest packet. To maximize the worst-case end-to-end delay of the last frame under study, one of the longest packets is transmitted first on the output link. The conclusion of the proof also conforms to Term (4) above.
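Theorem 1 can be checked with a small store-and-forward simulation (function name ours) under its assumptions: equal input/output rates and a back-to-back input busy period. The recurrence is exactly the one from the proof, d i = max(b i , d i−1 ) + x i :

```python
def transition_cost(proc_times):
    """Store-and-forward node, input and output links at the same rate.
    Returns d_n - b_n: how much later the last frame finishes on the
    output link than on the input link (Theorem 1's mTransCost)."""
    b = d = 0.0
    for x in proc_times:        # input frames arrive back to back (busy period)
        b += x                  # frame fully received at time b
        d = max(b, d) + x       # forwarded as soon as the output link is free
    return d - b

# whatever the order, the cost equals the longest processing time
print(transition_cost([3.0, 1.0, 1.0]), transition_cost([1.0, 1.0, 3.0]))  # 3.0 3.0
```

Unrolling the recurrence gives d i = b i + max j≤i x j , so the simulated cost is always the longest processing time seen so far, which is the claim of the theorem.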
Theorem 2: If the servicing rate of the input link is smaller than that of the output link, then the maximum transition cost (mTransCost) for the last frame is the processing time of the longest packet on the output link. If the servicing rate of the input link is larger than that of the output link, then the maximum transition cost is the accumulated processing time of all packets on the output link, minus that of all packets on the input link, plus the processing time of the longest packet on the input link.
Proof: As shown in Fig. 5, a sequence of n frames crosses the input link and the output link. Let x i and y i (1 ≤ i ≤ n) be the processing times of the i th frame on the input link and output link, respectively. The start and end transmission times of the i th frame on the input link are denoted a i and b i , and on the output link c i and d i . Let B IP and B OP be the servicing rates of the input link and output link, respectively. We distinguish between two cases. ad(i): B IP < B OP . If there is no idle time between any two consecutive frames on the output link, then the first frame is easily proved to be the longest packet, and the transition cost for the last frame is obtained. If there are some idle times between two consecutive frames on the output link, then for the last idle time (idle last ), located between x q and x q+1 , we have (17) and (18), from which the result is obtained.
For convenience of calculation, mTransCost is taken to be the processing time of the longest packet on the output link for the last frame. ad(ii): B IP > B OP . In this case, we have (21), from which it is concluded that TransCost is maximized when the first frame is the longest packet. Thus (22) follows, and the theorem is proven. Corollary 1: If there are some idle times between two consecutive frames of an input sequence and the servicing rate of the input link is larger than that of the output link, then the maximum transition cost is the accumulated processing time of all packets on the output link, minus the length of the input sequence on the input link, plus the processing time of the longest packet on the input link for the last frame.
Proof: This follows directly from the proof of Theorem 2: the maximum transition cost is the accumulated processing time of all packets on the output link, minus the length of the input sequence on the input link, plus the processing time of the longest packet on the input link for the last frame.
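The second case of Theorem 2 (B IP > B OP , longest frame first) can likewise be checked by simulation; the function name and the frame lengths are ours, and the rates are in bits per microsecond:

```python
def transition_cost_rates(lengths, b_ip, b_op):
    """Store-and-forward node with different input/output rates (Theorem 2).
    lengths in bits; b_ip, b_op in bits per microsecond.  Input frames
    arrive back to back; the longest frame is transmitted first."""
    b = d = 0.0
    for L in lengths:
        b += L / b_ip                    # reception finishes on the input link
        d = max(b, d) + L / b_op         # transmission finishes on the output link
    return d - b

# B_IP > B_OP, longest frame first:
# expected cost = sum(y_i) - sum(x_i) + x_max   (Theorem 2, case ii)
lengths = [1000.0, 400.0, 400.0]
cost = transition_cost_rates(lengths, b_ip=100.0, b_op=10.0)
expected = (sum(L / 10.0 for L in lengths)
            - sum(L / 100.0 for L in lengths)
            + 1000.0 / 100.0)
print(cost, expected)  # 172.0 172.0
```

With the longest frame first, the slow output link never goes idle, so the last frame is delayed by the full rate mismatch plus the reception time of the longest packet, matching the closed form.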
The FIFO context: As illustrated in Fig. 2, the packet m of the flow τ i under study is sent from the input link IP 0 to the output link OP h in a busy period bp h . Sequence seq i (1 ≤ i ≤ k) contains the frames of bp h coming from IP i . f (h) is the first packet processed in bp h . p(h−1) is defined as the first packet which is processed in bp h and comes from node h − 1. In a FIFO context (one single priority level), the frames of a flow τ j with a fixed priority level equal to that of τ i under study can delay m if they arrive at node h after the earliest time a h f (h) and before the latest time t + Smax h i . Then the estimation interval on node h is [a h f (h) , t + Smax h i ] for m. Let Q h be the net delay generated by all the concurrent flows on node h. Then, based on Theorem 1 and Theorem 2, we have the following Theorem 3.
Theorem 3: Assume that there are k input links on the node h with the servicing rate of each input link equal to that of the output link. Each input link has a input sequence including n subsequences. Then the maximum net delay Q h on the node h for the packet m is as follows.
Proof: By the definition of the busy period, there is no idle time between any two consecutive frames of an output sequence. It is then easy to obtain the following.
where sbq j i denotes the j th subsequence from the i th input link. It is noted that any subsequence from any input link has no idle time between any two consecutive frames. seq i denotes the duration of the aggregated flows from the i th input link and may contain some idle times. a h f (h) is computed by (26),
where l x (0 ≤ x ≤ k) denotes the duration of the input sequence seq x without its first packet. So, (23) transforms as follows.
Hence the theorem is proven. Generally, a h f (h) is not known at the beginning of the calculation. To maximize Q h , we need to maximize the interfering packets and minimize the serialization effect. Q h is viewed as a monotonically increasing function of a h f (h) . Firstly, it is obvious that Smax 1 i = 0 at the source node of the frame m. Then, we compute the maximum time t + Smax 2 i at the source node of the frame m. The maximum net delay Q 1 max on the source node is easily obtained. The length of the sequence seq 0 for the frame m under study at the output link of the source node is the sum of the frames joining the frame m under study, and needs to be recorded. At the second node, we consider the worst-case scenario of all the input links. The initial value of a 2 f (2) is set to t + Smax 2 i . Then a search within the interval (−∞, t + Smax 2 i ] is performed for a maximum value of Q 2 . While Q 2 = Q 2 max is satisfied, the search continues; once a 2 f (2) is minimized under the condition Q 2 = Q 2 max , the search stops. This computation process is repeated up to the last node visited by the frame m. Finally, the worst-case end-to-end delay of τ i is obtained by (28).
The FP/FIFO context: As illustrated in Fig. 6, in a FP/FIFO context (multiple priority levels), frames of a flow τ j with a fixed priority level higher than that of τ i under study can delay τ i if they arrive at node h after the earliest time − C m for τ j . If a sequence from an input link contains some packets with a higher priority than m and some other packets with a priority equal to that of m, then the sequence needs to be separated into two sequences, such as hsep 1 and sep 1 from IP 1 in Fig. 6. In a FP/FIFO context, Q h is calculated as follows, where hsbq j i denotes the j th subsequence of the input sequence hsep i with a higher priority than m from the i th input link. It is noted that the sequences hsep i and seq i from the same input link need to be separated into two sequences to compute max 1≤x≤k l x . Finally, the worst-case end-to-end delay of τ i in a FP/FIFO context is obtained by (29)-(33).
Theorem 4: Let m be the packet of flow τ i generated at time t. When the flows are scheduled FIFO or FP/FIFO under the worst-case scenario described in Section V, the following holds for any time t ≥ J i,switch .
Proof: According to the definition of Q h , it is easy to obtain.
Assume that an arbitrary busy period [S h c , T h c ] on each node h of the path P i is selected for a packet m and R i is obtained. The busy period [S h m , T h m ] on each node h is the one captured by RTA for the packet m, and R RTA is obtained. Then, we have the following. In the first (source) node, there is no serialization effect, and the claim trivially holds.
In the second node, it trivially holds.
In the third node, we distinguish between two cases. In case (i), the estimation interval on the third node transforms as follows. Then, it is easy to obtain the result.
In the last node, the delay is obtained in the same way.
C. COMPARISON WITH CTA AND OTA
According to OTA, the net delay generated by all the concurrent flows on node h in a FIFO context is calculated as follows.
The serialization effect on node h, given by Term (48), is obtained as follows.
With all the same input subsequences given, we compare OTA and RTA. The difference d 1 between Q h OTA and Q h RTA is given as follows.
According to (50), Q h OTA is larger than Q h RTA . Generally, considering the frame constraints of the same flows on node h, the accumulated interference of the input sequences captured by OTA may be larger than that of the sequences captured by RTA. Compared to OTA, RTA can compute a more exact worst-case ETE delay. Besides, (50) conforms to the conclusion described in [14]. Since the idle times included in the input sequences are taken into account, RTA can compute a more exact worst-case ETE delay than the method proposed in [14].
Compared to OTA, the net delay (Q h CTA ) computed by CTA includes Terms (43)-(46) but not Term (47). With all the same subsequences given, we compare CTA and RTA. The difference d 2 between the net delay (Q h CTA ) calculated by CTA and Q h RTA is given as follows.
According to (51), Q h CTA is larger than or equal to Q h RTA . Compared with CTA, the maximum pessimism reduced by RTA is ( max 0≤x≤k (min(l x )) − min(l 0 )). Finally, the worst-case ETE delay obtained by CTA is larger than or equal to that obtained by RTA.

VI. CASE STUDY
A. CASE 1
We first consider the counterexample given in Fig. 7, which is discussed in [17], [18]. The flow temporal characteristics are given in Table 1. Here, the switching latency is considered null. The transmission priority of all flows is identical. Frame f 1 , which is under study, follows the path P = ES 2 − S 3 − ES 7 . It can be delayed by frames of flows τ 2 , τ 3 and τ 4 at the output port of S 3 . J j,switch is set to 40 µs.
Let us use the RTA to compute the ETE delay of frame f 1 . Since f 1 is the only flow emitted by its source end system, S max ES 3 1 = 40 µs and Q ES 1 max = 40 µs. Then, we need to compute the value of A i,j for each flow joining the frame f 1 . The length of the input sequence at the input port of S 3 is 40 µs. We consider the worst-case scenario at each input port of S 3 , as shown in Fig. 8. We implement the search to calculate the maximum net delay on node S 3 for frame f 1 . The worst-case ETE delay along the path followed by frame f 1 accumulates to 180 µs. This result is the same as the hand-built value (180 µs) and CTA (180 µs), higher than OTA (160 µs) and smaller than NC (200.19 µs).
Based on the results of the above analysis, the underestimation problem is solved by the RTA. It is noted that the optimism of OTA exists under the condition J j,switch = 40 µs. If J j,switch is set to 0 µs, the underestimation problem produced by OTA disappears. In addition, the arrival time gap between the two consecutive frames of τ 4 in the worst-case scenario given by Fig. 8 cannot be less than 80 µs, which [18] misses, as it does not consider the frame constraints of the same flow on the input port.

B. CASE 2
We experiment with the configuration depicted in Fig. 1. The servicing rate of the output ports is equal to 10 Mb/s for ES 1 , ES 3 , ES 4 , S 2 (to S 4 ), S 3 (to ES 6 ) and 100 Mb/s for the other nodes. The characteristics of the flows are given in Table 2. J j,switch is set to 0 µs. The global load of traffic encountered by some flows (e.g., τ 7 ) on their path is strictly larger than 1, which prevents the convergence of the computation with the current TA, but does not matter for RTA. The worst-case ETE delays computed with NC, FA [1] and RTA are presented in Table 3 and Fig. 10. The exact delay (ED) computed by hand is also presented. As shown in Fig. 11, the ETE delay bounds obtained by RTA are tighter than those obtained by NC and FA. Compared to NC, the difference is up to 561.30% for flows τ 8 (ES 9 ) and τ 9 (ES 9 ). Compared to FA, the difference is up to 441.05% for flows τ 8 (ES 9 ) and τ 9 (ES 9 ).

C. CASE 3
In this case study, we experiment with the topology configuration depicted in Fig. 7. The experiment network contains 100 flows, each of which is configured with a random length and BAG. The statistical information on the load is shown in Table 4. The four approaches (NC, CTA, OTA and RTA) have been applied to this network configuration. In Fig. 12, the worst-case ETE delays of the flows have been computed with NC, CTA, OTA and RTA. We compare RTA with NC, CTA and OTA. For example, the worst-case end-to-end delays computed by NC and CTA for VL 1 are 1922 µs and 1362 µs. With the grouping optimization, the worst-case end-to-end delay computed by OTA for VL 1 is 1072 µs. By using RTA, the worst-case delay for VL 1 is only 994.8 µs. Compared with NC, CTA and OTA, the gain obtained by RTA is 50.06%, 26.96% and 7.2%, respectively. For the other VLs, the pessimism in percentage between the worst-case ETE delays obtained by RTA and by the other methods is shown in Fig. 13. The upper bounds computed by RTA are on average 41.1%, 14.34% and 5.63% less pessimistic than the upper bounds computed by NC, CTA and OTA, respectively. Different from the existing trajectory approaches, RTA accumulates the maximum net delay of each node visited by the packet m under study. Besides, the frame constraints of the same flow and the idle times in the input sequences are considered. Thus, RTA achieves less pessimistic upper bounds than the other existing trajectory approaches.

VII. CONCLUSION
We have proposed a revised trajectory approach for computing worst-case ETE delays of flows in an AFDX network. The proposed method makes it possible to obtain more exact bounds.
It can reduce some pessimism compared to the current trajectory approaches and eliminate the underestimation problem introduced by OTA. Compared to the other approaches (NC and FA) on a case study, the results indicate that RTA obtains tighter bounds than NC and FA. Finally, applying the RTA method to other kinds of architectures, like Time-Triggered Ethernet, would be of the highest interest.