Worst-Case Latency Analysis for AVB Traffic Under Overlapping-Based Time-Triggered Windows in Time-Sensitive Networks

Deterministic and low end-to-end latency communication is an urgent demand for many safety-critical applications, such as autonomous vehicles and automated industries. The time-sensitive network (TSN) is introduced through Ethernet-based amendments in the IEEE 802.1 TSN standards to support time-triggered (TT) traffic in these applications. In the presence of TT flows, TSN is designed to integrate Audio/Video Bridging (AVB) and Best Effort (BE) traffic types. Although AVB traffic has a lower priority than TT, it still requires low and deterministic latency, which may not be guaranteed under strict predefined TT scheduling constraints. For this reason, window-overlapping scheduling algorithms have recently been proposed in different works, together with analytical forms for TT latency under overlapped windows. However, a worst-case AVB latency evaluation under overlapped TT windows is also essential for critical optimizations and tradeoffs. In this paper, a worst-case end-to-end delay (WCD) analysis for AVB traffic under overlapping-based TT windows (AVB-OBTTW) is proposed. Separate analytical models are derived using the network calculus (NC) approach for AVB-OBTTW with both the non-preemption and preemption mechanisms. Using an actual vehicular use case, the proposed models are evaluated with back-to-back and porosity configurations under light and heavy loading scenarios. For specific AVB credit bounds, a clear WCD reduction is achieved by increasing the overlapping ratio (OR), especially under the back-to-back configuration. The preemption and non-preemption modes are compared under different loading conditions, with preemption yielding lower WCDs than non-preemption, especially with the porosity style. Compared to the latest related works, AVB-OBTTW reduces WCD bounds and increases unscheduled bandwidth, achieving the highest enhancements with the maximum allowable OR.


I. INTRODUCTION
Deterministic and low-latency communication is a critical and significant design requirement to support urgent real-time applications, such as autonomous vehicles and industrial automation.
The associate editor coordinating the review of this manuscript and approving it for publication was Fung Po Tso .
Failing to meet this requirement may lead to dangerous situations for humans or society. For this purpose, several technologies have been introduced to support the related applications. For example, the Ethernet network was recently proposed as an appropriate communication environment, as it offers sufficient bandwidth at low cost for such applications. Although multiple Ethernet-based protocols have previously been introduced, such as Audio/Video Bridging (AVB) Ethernet and time-triggered (TT) Ethernet, none of them can provide the safety-critical transmission requirements. After adding several extensions to TT-Ethernet, the time-sensitive network (TSN) framework was presented in the IEEE 802.1 TSN standards to manage and integrate safety-critical applications. These extensions cover network management, synchronization, traffic scheduling, and reliability aspects to guarantee no congestion loss, extremely low jitter, and deterministic end-to-end latency for time-triggered traffic [1]. Alongside TT traffic, the TSN framework serves AVB (as soft real-time traffic) and Best Effort (BE) traffic without QoS guarantees. These features attract many relevant experts and companies to adopt TSN technology.
Many TSN standards have been presented to manage and control mixed-criticality traffic environments. For the scheduling aspect, the time-aware shaping (TAS) technique is defined in IEEE 802.1Qbv [2] to control TT forwarding according to a time-gating system defined in the gate control list (GCL) schedule in each network node. The predefined schedules (GCLs) are globally synchronized to specify the open/close time intervals for each priority queue at the egress port. The synchronization constraints for end-to-end connections are introduced in IEEE 802.1AS [3]. Based on the GCL pattern, the credit-based shaping (CBS) mechanism is used to share available bandwidth between AVB and BE flows, as defined in IEEE 802.1Qav [4].
Implementing an effective traffic schedule (GCL) for all selected nodes in the transmission path is a critical and complicated problem. The complexity arises not only from meeting the time-triggered requirements but also from the need to consider unscheduled critical-time (AVB) traffic. AVB flows require enough bandwidth to ensure low and deterministic overall latency. Accordingly, several GCL implementations have been presented to guarantee TT requirements while considering AVB traffic. One attractive design idea is to allow TT windows to overlap in each node, leading to more available bandwidth for unscheduled traffic, as suggested in [5], [6]. The TT overlaps have been optimized in [7], [8] to ensure worst-case TT latency requirements. Together with the TT QoS needs, worst-case performance evaluations for AVB traffic that account for TT impacts are essential. These evaluations help interested TSN designers implement appropriate GCLs for each targeted use case. All previous worst-case AVB latency evaluations have been implemented based on complete isolation between TT windows, as in [9]-[11]. Thus, a comprehensive view of the worst-case AVB latency performance under overlapping-based TT windows is essential to make critical tradeoffs with TT evaluations, resulting in an appropriate GCL design for each use case.
Trusted and safe worst-case latency representations are analytically based, as all corner cases can be covered at the design stage. One of the analytical methods in real-time systems is Network Calculus (NC) [12], which is preferred over other approaches as it produces less pessimistic latency bounds [13], [14]. For this reason, the NC approach is adopted in this article to formulate the presented model.
In this paper, worst-case end-to-end latency forms are derived for AVB-X (X ∈ {A, B}) traffic under overlapping-based TT windows, with non-preemption and preemption modes. All worst cases for AVB-X transmissions are considered to build the related WCD forms as a function of the overlapping ratio (OR) between TT windows. The overlapping may occur from one or two sides of a TT window. Accordingly, the presented forms are evaluated under porosity (one-sided overlapping) and back-to-back (two-sided overlapping) configurations. These evaluations (AVB latencies under variable OR), together with those in [8] (TT latencies under variable OR), can be considered a complete guide for TSN designers to implement more appropriate and trusted GCL schedules that guarantee worst-case latency deadlines for hard/soft real-time traffic. Thus, the main contributions of this paper can be summarized as:
• A worst-case AVB latency under overlapping-based TT windows (AVB-OBTTW) algorithm is proposed. The GCL schedules are mathematically represented in each selected node under an adjustable OR between different-priority TT windows in the hyper-period.
• Using the network calculus (NC) approach, the worst-case end-to-end latency bounds for AVB-X traffic are formulated with the non-preemption and preemption modes. A realistic vehicular use case is used to evaluate the AVB-OBTTW algorithm under the back-to-back and porosity configurations with light and heavy loading conditions.
• A comparison between non-preemption and preemption modes is provided under both loading scenarios.
• The AVB-OBTTW model reduces AVB-WCD bounds and increases the unscheduled bandwidth compared to previous works. The lowest WCDs are obtained with the maximum allowable OR, which raises the unscheduled bandwidth to its maximum without missing TT deadlines.

The remainder of the article is structured as follows. Related works are discussed in Section II. Section III introduces relevant background on the TAS mechanism, preemption modes, and the CBS technique. The AVB-OBTTW system model with the related design decisions is described in Section IV. Section V presents the impact of non-overlapped and overlapped TT open windows. The worst-case end-to-end latency analysis for AVB-X traffic using the AVB-OBTTW algorithm and the related performance evaluations are presented with critical discussions in Sections VI and VII, respectively. Finally, the paper is concluded in Section VIII.

II. RELATED WORKS
In this section, relevant scheduling research studies are discussed with related considerations regarding unscheduled critical time traffic, addressing some critical ideas that have been proposed to support unscheduled flows. Further, several analytical models that evaluate AVB performance are discussed here, considering their limitations and objectives.
Finally, this section focuses on the necessity to evaluate AVB performance under more flexible scheduling algorithms targeting more suitable GCL patterns for each use case.
Craciunas et al. [15] implemented the main scheduling constraints that guarantee deterministic performance for scheduled traffic using predefined GCLs under complete timing isolation between scheduled and unscheduled transmissions. Based on [15], several scheduling amendments have been proposed. For example, an enhanced TAS (eTAS) was recently presented in [16] to support non-periodic or unexpected critical-time flows, such as alarms or emergency events. Although the OMNeT++ simulation results guaranteed emergency traffic (ET) deadlines with little impact on scheduled traffic (ST) performance, unscheduled critical-time traffic was not considered. Moreover, several researchers argued that implementing GCLs with strict timing constraints that protect TT flows unrestrainedly may result in missing AVB requirements. The related timing constraints include the TT window duration, guard band, offsets, and isolation from other TT windows. Thus, more flexible GCL designs are recommended while still ensuring TT demands.
Several scheduling solutions have been proposed to support unscheduled critical-time flows. For example, Nasrallah et al. [17] proposed adjustable window durations for scheduled and unscheduled transmissions based on the related latency deadlines and network loading conditions. The simulation results showed that the queuing delays of the associated traffic type decrease with increasing corresponding window lengths and vice versa. However, the presented algorithm considered only two priority queues at the node output port without differentiating the unscheduled queues, which should be served under different transmission constraints to meet reasonable latency deadlines. Gavrilut et al. [18], [19] proposed including AVB in the scheduling algorithm with TT traffic using the greedy randomized adaptive search procedure (GRASP). The results ensure feasible scheduling for AVB flows under a small-scale network topology. However, AVB schedulability was not guaranteed under more complicated networking and loading scenarios. In [20], a simulation-based end-to-end latency reduction was obtained by using data compression methods for all TSN traffic types. Nevertheless, all the mentioned algorithms were implemented using simulation methods, which fail to cover all corner cases and lead to untrusted performance evaluations.
The TAS mechanism requires a guard band to isolate scheduled and unscheduled transmissions, resulting in wasted bandwidth [21]. Different guard band limitations are considered in TSN depending on the preemption mode used, as described in Section III-B. The preemption mechanism is well defined in IEEE 802.1Qbu [22], allowing some higher priority traffic (express type) to interrupt the transmission of lower priority traffic (preemptable type). However, IEEE 802.1Qbu does not specify which priority queues are set as express and which as preemptable. Most TSN researchers consider TT queues as express and the others as preemptable.
Ashjaei et al. [23], however, proposed different assumptions for the express and preemptable assignments to evaluate the frame response time. As expected, the response time improved for express-based queues as long as the other lower priority queues remained preemptable. In [24], Guo et al. confirmed that the preemption technique can reduce latency and enhance network resource utilization. Recently, Li et al. [25] discussed the importance of bandwidth allocation in CBS under TT impact by deriving the shaping and service curves to calculate traffic delay bounds and backlogs, leading to an optimized bandwidth allocation. The authors confirmed that reserving more bandwidth for stream reservation traffic may not improve latency performance, especially under heavy load conditions. Accordingly, bandwidth assignment in CBS should be granted carefully to ensure determinism for the traffic load.
Several researchers derived analytical forms to evaluate end-to-end AVB latency based on the AVB-Ethernet protocol, such as [26], [27]. However, the TSN amendments were not considered, i.e., the analysis was done without TT effects. In [9]-[11], other analytical models for AVB latency have been proposed that consider the TT impact in the TSN system. Zhao et al. [9] formulated the worst-case AVB end-to-end latency with both allocation modes, non-preemption and preemption. The authors enforced a non-overflow condition for the AVB credit. However, analytical corrections to [9] were proposed in [10], [11], resulting in less pessimistic AVB boundaries. In [10], Ren et al. obtained a slight reduction in AVB latencies under a more precise TT arrival curve compared to [9]. Moreover, Zhao et al. [9] implemented the AVB arrival curve as a sum of the individual AVB-X arrivals egressing from the previous node, without considering the data speed and CBS shapers [11]. Same-class flows can arrive simultaneously only at the first node (source); at non-first nodes, the data link speed and the CBS shapers also control the AVB-X arrivals. In [11], a more precise AVB arrival curve was considered to implement the worst-case latency for multiple AVB queues. The proposed analysis was implemented based on frozen and non-frozen AVB credits during guard band intervals. The evaluation results in [11] achieved a dramatic latency improvement for all AVB classes over those obtained in [9].
Nevertheless, all the previous analytical models of AVB latency performance have been implemented based on complete isolation between TT windows. Although allowing TT windows to overlap will influence the overall time-triggered latency performance, complete TT isolation results in a considerable waste of bandwidth in guard bands and may lead to missing QoS requirements for unscheduled critical-time (AVB) traffic. Accordingly, allowing TT windows to overlap will undoubtedly improve the unscheduled traffic performance [6]. However, strict constraints must be applied to bound the overlapping ratio so that TT requirements are guaranteed, as discussed in [7], [8]. Shalghum et al. [7] proposed an analytical model for the worst-case end-to-end latency of TT traffic based on the flexible window-overlapping scheduling (FWOS) algorithm, which was used to determine the maximum allowable overlapping ratio (OR) that guarantees TT latency deadlines. More latency-based optimizations for the FWOS model were introduced in [8] as an optimized FWOS (OFWOS) algorithm, resulting in a complete view of the overlapping effect on TT latency performance. As TT overlapping was proposed to support soft real-time (AVB) flows, a complete view of the worst-case AVB latency under these overlaps is essential to make critical tradeoffs with the TT evaluations and implement the most appropriate GCL designs.

III. GENERAL BACKGROUND

A. TIME-AWARE SHAPING (TAS) MECHANISM
The TSN structure consists of nodes (V) connected by physical links. The nodes include end systems (ESs) (information sources and destinations) and switches (SWs). Appropriate connections among the targeted ESs are managed and established by the stream reservation protocol (SRP) introduced in IEEE 802.1Qcc [28]. The time-aware shaping (TAS) mechanism selects data between TSN nodes, as defined in the IEEE 802.1Qbv protocol [2]. The incoming flows are differentiated into eight priority-based queues in each node, as shown in Fig. 1. One or more queues are assigned to TT flows, two to AVB-A and AVB-B flows, and the remainder to BE flows. Forwarding frames from these queues to the associated egress port is controlled by the predefined GCLs, which specify the opening and closing events for each queue gate with guaranteed synchronization between nodes. Only one priority queue is allowed to forward frames when its gate is open, using the first-in-first-out (FIFO) technique. As depicted in Fig. 1, the TSN-aware switch provides switching fabric, filtering, and data selection for the incoming data from IN^h ingress ports, where h is the node order. Accordingly, between any two nodes h and h+1, the associated delay includes the processing delay (D^h_proc), queuing delay (D^h_queue), selection delay (D^h_select), and propagation delay (D^{h,h+1}_prop).
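As a simple illustration of this per-hop decomposition, the end-to-end delay along a path is the sum of the four delay components at each hop. The following sketch uses purely illustrative numbers, not values from this paper:

```python
# Hypothetical per-hop delay budget for a TSN path: the end-to-end delay is
# the sum of processing, queuing, selection, and propagation delays per hop,
# matching D^h_proc + D^h_queue + D^h_select + D^{h,h+1}_prop in the text.
# All numbers below are illustrative placeholders.

def end_to_end_delay(hops):
    """hops: list of dicts with per-hop delay components (microseconds)."""
    return sum(h["proc"] + h["queue"] + h["select"] + h["prop"] for h in hops)

path = [
    {"proc": 2.0, "queue": 10.0, "select": 1.0, "prop": 0.5},  # ES -> SW1
    {"proc": 2.0, "queue": 15.0, "select": 1.0, "prop": 0.5},  # SW1 -> SW2
]
print(end_to_end_delay(path))  # -> 32.0 (total path latency in microseconds)
```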

B. PREEMPTION MODES
The TSN framework supports non-preemption and preemption modes with different degrees of protection for critical time traffic, as depicted in Fig. 2.

1) NON-PREEMPTION MECHANISM
In this mode, unscheduled traffic (AVB and BE) cannot be interrupted if it is already being transmitted. A guard band interval must therefore be allocated in front of each TT window to protect the associated transmissions, with a length equal to the transmission time of the maximum transmission unit (MTU) of 1500 bytes in the Ethernet protocol, as depicted in Fig. 2(a). The associated gates of all preemptable queues (AVB and BE) are closed during these guard bands. Accordingly, the preemptable traffic will experience a dramatic delay, and bandwidth utilization will be degraded.

2) PREEMPTION MECHANISM
The IEEE 802.1Qbu standard [22] allows a preemptable frame to be interrupted when a TT window opens. The TSN standard defines two slightly different versions of preemption: with HOLD/RELEASE and without HOLD/RELEASE.
• Preemption without HOLD and RELEASE: In this version, the preemptable frames can be transmitted at any time, i.e., there is no gate-closing for the preemptable queues. When a fragment of a preemptable frame starts transmission before the opening edge of a TT window, the TT frame waits until this fragment finishes, and the remaining preemptable fragments are resumed once the TT transmission has completed, as shown in Fig. 2(b). Thus, no guard bands are required before TT windows. However, an overhead (OH) is required to isolate and reassemble the frames at the destination, with a 24-byte length as given by the standard. The TSN standard specifies that fragments of up to 123 bytes cannot be preempted by TT transmissions [22] (Annex R-3). Thus, these fragments will increase the experienced TT delay.
• Preemption with HOLD and RELEASE: In this version, all preemptable gates are open if at least one frame is in the associated queue. The HOLD and RELEASE feature allows the designer to implement a guard band before the TT window with a considerably shorter length than in the non-preemption mode, as depicted in Fig. 2(c). This guard band is used to preempt fragments, not entire frames, with sizes larger than 124 bytes [22] (Annex R-3). Thus, this function ensures complete protection for TT frames, unlike the version without HOLD and RELEASE, while simultaneously reducing the impact on the bandwidth available for preemptable traffic compared to the non-preemption mode.
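To make the guard-band tradeoff concrete, the protection time lost per TT window in each mode can be estimated from the frame sizes quoted above (1500-byte MTU guard band, 124-byte fragment guard band, 24-byte reassembly overhead). The link rate below is an assumed example value:

```python
# Sketch comparing the per-window protection overheads of the two modes,
# using the sizes quoted in the text. The 1 Gbit/s link rate is an assumption
# for illustration only.
def tx_time(n_bytes, link_rate_bps):
    """Transmission time (seconds) of n_bytes at the given link rate."""
    return n_bytes * 8 / link_rate_bps

C = 1e9                     # assumed link rate: 1 Gbit/s
gb_np = tx_time(1500, C)    # non-preemption guard band (one MTU frame)
gb_p = tx_time(124, C)      # HOLD/RELEASE guard band (largest unpreemptable fragment)
oh = tx_time(24, C)         # per-preemption reassembly overhead
print(gb_np, gb_p + oh)     # HOLD/RELEASE costs roughly a tenth of the protection time
```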

C. CREDIT-BASED SHAPING (CBS) MECHANISM
According to the path GCLs, AVB and BE flows share the available bandwidth using the credit-based shaping (CBS) technique in each selected node, as defined in IEEE 802.1Qav [4]. The CBS mechanism controls the AVB traffic forwarding to avoid starvation conditions for lower priority flows. Starvation avoidance means that even if an AVB frame has a higher priority than other AVB or BE frames, it cannot dominate the available bandwidth, and all of them share it according to the CBS limitations. The AVB gate selects a frame only if the gate is open, the frame is allowed to be transmitted by the CBS constraints, and no higher priority AVB frame is being transmitted. The CBS parameters for the AVB-X queue include the minimum credit limit required for transmission (cr^min_X), the maximum credit limit (cr^max_X), the sending slope (sdSl_X), and the idle slope (idSl_X). As defined in the IEEE 802.1Qcc [28] and IEEE 802.1Qbv [2] standards, a frame can be forwarded only if the credit is higher than or equal to zero. If the node is selected in a transmission path, the AVB-X credit is initialized to zero and then decreases with sdSl_X during AVB-X transmissions, increases with idSl_X when AVB-X is waiting to be transmitted, and is frozen during TT and guard band intervals. Furthermore, when the AVB-X credit is positive for an empty queue, the credit is reset to zero; when the associated credit is negative for an empty queue, it increases with idSl_X until it reaches zero. The minimum and maximum credit limits for the AVB queues can be mathematically derived as in Appendix A.
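The credit rules above can be sketched as a minimal single-queue simulator; the slopes, time step, and state labels below are illustrative assumptions, not parameters from this paper:

```python
# Minimal CBS credit simulator for one AVB-X queue, following the rules in the
# text: credit decreases with sdSl_X while transmitting, increases with idSl_X
# while waiting, is frozen during TT/guard-band intervals, and a positive
# credit is reset to zero when the queue is empty. Values are illustrative.

def step_credit(credit, state, id_slope, sd_slope, dt, queue_empty=False):
    if state == "transmit":
        credit += sd_slope * dt      # sdSl_X < 0: credit drains while sending
    elif state == "wait":
        credit += id_slope * dt      # idSl_X > 0: credit recovers while waiting
    elif state == "frozen":
        pass                         # TT window or guard band: no change
    if queue_empty and credit > 0:
        credit = 0.0                 # positive credit cleared on an empty queue
    return credit

cr = 0.0
cr = step_credit(cr, "wait", id_slope=25e6, sd_slope=-75e6, dt=1e-6)
cr = step_credit(cr, "transmit", id_slope=25e6, sd_slope=-75e6, dt=1e-6)
print(cr)  # roughly 25 - 75 = -50 bits of credit
```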

IV. SYSTEM MODEL AND DESIGN DECISIONS
There are two methods to combine scheduled and unscheduled transmissions in the TSN system, the back-to-back and porosity configurations, as shown in Fig. 3(a) and (b).
It can be noticed that more bandwidth is saved for unscheduled transmissions when TT windows overlap, especially with the back-to-back style, where the overlapping can be implemented from both window sides and only one guard band is required during the hyper-period. In [7] and [8], all overlapping situations are considered to express and evaluate the worst-case TT latency behavior with both configuration methods. Nevertheless, the worst-case end-to-end latency for AVB traffic under those overlapping situations is also essential to provide critical optimizations and tradeoffs.
As the AVB transmissions are based on CBS, saving more intervals by minimizing the number of guard bands and overlapping TT windows will reduce the waiting time of the traffic in the associated queue. Fig. 4 shows an example of the improved credit variation for AVB queues under an overlapped TT window, drawn assuming both AVB queues are empty when t < 0. Accordingly, a delay reduction is obtained for AVB queues under overlapping-based TT windows compared to non-overlapping conditions. This article formulates the worst-case end-to-end latency for AVB traffic under overlapping-based TT windows. The overlapping interval between any two adjacent TT windows k and k+1 is L^h_{k,k+1} = [c^h_k − o^h_{k+1}]^+, where c^h_k is the closing time of the k-th TT window, o^h_{k+1} is the opening time of the (k+1)-th TT window, and NW^h_GCL represents the number of TT windows in the hyper-period (T^h_GCL). For performance evaluation, we consider the overlapping ratio (OR) as a design parameter, defined as the ratio between the overlapping interval (L^h_{k−1,k}) and the overall duration of the left overlapped window (W^h_{TT_{k−1}}). The main design stages of the worst-case AVB latency under overlapping-based TT windows (AVB-OBTTW) algorithm are illustrated in Fig. 5. First, the GCL is implemented for all selected nodes by assuming a variable OR between TT windows. The implemented GCLs are used to calculate the TT arrival curves with the related guard-band effects, and then the lower-bound AVB-X service curve is determined. The resulting service curve is used with the tight AVB-X arrival curve to compute the worst-case end-to-end latency (WCD^{p(np)}_X) for AVB-X traffic as a function of OR. All overlapping situations are considered to evaluate the WCD performance. Thus, gathering the evaluation results in this work with those that consider the worst-case latency performance for TT traffic under the same overlapping conditions in [8] (OFWOS algorithm) will give TSN designers a complete view to select an optimized OR for appropriate GCL implementations.
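The overlapping ratio used as the design parameter above can be sketched directly from the window open/close times; the times below are illustrative:

```python
# Sketch of the overlapping ratio (OR) between adjacent TT windows: the
# overlap between window k-1 and window k, divided by the duration of the
# left (earlier) window, as defined in the text. Times are illustrative.

def overlap_ratio(open_prev, close_prev, open_next):
    """OR between two adjacent TT windows; 0 when they do not overlap."""
    overlap = max(0.0, close_prev - open_next)    # L^h_{k-1,k} = [c - o]^+
    return overlap / (close_prev - open_prev)     # divide by left window length

print(overlap_ratio(0.0, 100.0, 60.0))  # windows overlap by 40 of 100 -> 0.4
```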
The aforementioned design stages to build AVB-OBTTW are considered according to the following overall assumptions and limitations.
As the proposed model is implemented based on the TAS mechanism, all network elements are assumed to be fully synchronized, without considering any synchronization errors between them. Further, the GCLs for the end-to-end selected nodes are implemented offline, without considering scheduling dynamism. Another significant TSN feature is frame replication and elimination for reliability (FRER), which ensures zero congestion loss for time-triggered frames, as defined in IEEE 802.1CB. However, this function is not considered in this work. Moreover, preemption with HOLD and RELEASE is assumed here for comparison with the non-preemption mode under the related guard band limitations, as described in Section III-B. During these guard bands, the credit of the AVB queues is assumed to be frozen throughout the whole system analysis. Note that all the notations used in this article are summarized in Table 1.

V. TT IMPACT CONSIDERATIONS

A. TT IMPACT BASED ON NON-OVERLAPPING WINDOWS
This section expresses the aggregate arrival curve in a node h for all TT traffic incoming from other nodes that are multiplexed in h. For an individual k-th queue (Q^h_{TT_k}) as shown in Fig. 6, the number of associated periodic windows available to transmit τ_{TT_k} frames from an arbitrary node h to the transmission link during any arbitrary interval [s, t], where ∀s, t ∈ R+, t ≥ s and R+ is the set of positive real numbers, is M^h_{TT_k}(s, t), which can be represented as [29]

M^h_{TT_k}(s, t) = ⌈(t − s) / T^h_{TT_k}⌉,

where T^h_{TT_k} is the period of the Q^h_{TT_k} windows. By assuming that each window is fully used, the arrival curve for Q^h_{TT_k} is

α^h_{TT_k}(t) = M^h_{TT_k}(s, t) · W^h_{TT_k} · C^h,

where W^h_{TT_k} is the duration of the Q^h_{TT_k} window, and C^h represents the egressing bit rate from node h. If the number of TT queues in node h is N^h_TT, the hyper-period T^h_GCL is equal to the least common multiple of the periods of all TT queues. For a given scheduling timetable (GCL) for an egress port at node h, the number of all TT windows in T^h_GCL is NW^h_GCL. By taking the i-th TT window (i ∈ NW^h_GCL) as a reference and assuming s = 0, the aggregate TT arrival curve can be represented as

α^{h,i}_TT(t) = Σ_{j=1}^{NW^h_GCL} C^h · W^h_{TT_j} · ⌈(t − OD^h_{j,i}) / T^h_GCL⌉,

where OD^h_{j,i} represents the offset difference between the i-th and j-th TT windows in T^h_GCL, i.e., OD^h_{j,i} = OD^h_{j,0} − OD^h_{i,0}. It can be noticed that the above expression of the aggregate arrival curve is formulated based on complete isolation between TT windows. This article considers overlapping-based GCLs to implement the end-to-end latency for AVB queues, including both the preemption and non-preemption modes.
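The staircase shape of the single-queue TT arrival bound can be sketched numerically; the period, window length, and link rate below are assumed example values (microseconds and bits per microsecond), not values from this paper:

```python
# Sketch of the periodic TT arrival curve for one TT queue: in any interval
# of length t, at most ceil(t / T) windows of duration W can transmit at
# link rate C. Units here are illustrative: microseconds and bits/us.
import math

def tt_arrival_curve(t, period, window, link_rate):
    """Upper bound (bits) on TT traffic from one queue over an interval of length t."""
    if t <= 0:
        return 0.0
    return math.ceil(t / period) * window * link_rate

# assumed values: 1000 us period, 100 us window, 1000 bits/us (1 Gbit/s)
print(tt_arrival_curve(2500, 1000, 100, 1000))  # -> 300000 (3 windows of bits)
```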

B. TT IMPACT BASED ON OVERLAPPING WINDOWS
This subsection starts by formulating the GCL during the hyper-period, defining the opening and closing times for all TT windows with flexible overlapping between them. Then, the aggregate TT arrival curve is represented based on these GCLs for the non-preemption and preemption modes.

1) GCL FORMULATION BASED ON OVERLAPPING TT WINDOWS
Each TSN-aware node has a GCL that specifies all queue gates' opening and closing events. In our model, the GCLs are represented as mathematical expressions for each node without considering the priority difference between TT queues, as they have the same impact on the AVB performance. Accordingly, we refer to each TT window by its order in the hyper-period, not by its priority. By considering the i-th TT window as a reference, the associated opening and closing times can be given as

o^h_i = OD^h_{i,0},  c^h_i = o^h_i + W^h_{TT_i},

where OD^h_{i,0} equals the interval from zero to the i-th opening time. Accordingly, the opening and closing times for the other TT windows in the hyper-period can be determined as

o^h_k = o^h_i + OD^h_{k,i},  c^h_k = o^h_k + W^h_{TT_k},

where OD^h_{k,i} = OD^h_{k,0} − OD^h_{i,0}, and based on the offset difference, we can bound the overlapping ratio between the (k−1)-th and k-th windows as

0 ≤ OR^h_{k−1,k} = [c^h_{k−1} − o^h_k]^+ / W^h_{TT_{k−1}} ≤ OR_max.

Based on the above expressions, the end-to-end GCLs are implemented, and then the aggregate TT arrival curve is determined to calculate the worst-case AVB latencies.
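A minimal sketch of generating one hyper-period of open/close times, where each window may overlap its left neighbour by a chosen OR; the durations and ratios are illustrative inputs, not a schedule from this paper:

```python
# Sketch: build one hyper-period of TT window (opening, closing) pairs where
# window k is shifted left into window k-1 by OR^h_{k-1,k} * W^h_{TT_{k-1}}.
# Durations and ORs below are illustrative.

def build_gcl(durations, ors):
    """durations[k]: length of TT window k; ors[k]: OR with the previous window."""
    events, t = [], 0.0
    for k, w in enumerate(durations):
        if k > 0:
            t -= ors[k] * durations[k - 1]  # shift opening left into the neighbour
        events.append((t, t + w))           # (opening, closing) of window k
        t += w
    return events

print(build_gcl([100.0, 100.0], [0.0, 0.5]))  # second window opens at t=50
```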

2) AGGREGATE TT ARRIVAL CURVE WITH NON-PREEMPTION MODE
On a non-preemption basis, a guard band interval is established before each TT window to avoid any expected interference with unscheduled traffic transmissions. During these guard bands, AVB transmissions are not allowed. The length of the non-preemption guard band for the j-th TT window (GB^{h,np}_j) equals the time required to transmit one maximum-size (MTU) frame at rate C^h. Accordingly, the j-th TT window must be protected by

G^{h,np}_j = [GB^{h,np}_j − L^h_{j−1,j}]^+,

where [x]^+ = x when x ≥ 0, and zero otherwise, and L^h_{j−1,j} is the overlapping interval between the (j−1)-th and j-th TT windows. Hence, we assume that the credit of both AVB queues is frozen during all G^{h,np}_j and W^h_{TT_j} intervals. When the i-th TT window is considered as a reference, the aggregate arrival curve can be formulated as a sum of the arrival curves produced by all TT windows and the related guard bands in T^h_GCL, as follows:

α^{h,i}_{G^np+TT}(t) = Σ_{j=1}^{NW^h_GCL} C^h · (G^{h,np}_j + W^h_{TT_j} − L^h_{j,j+1}) · ⌈(t − (OD^h_{j,i} − (G^{h,np}_j − G^{h,np}_i))) / T^h_GCL⌉.

To clarify the effect of the TT overlapping, L^h_{j,j+1} is expressed in terms of the OR between each pair of adjacent TT windows as

L^h_{j,j+1} = OR^h_{j,j+1} · W^h_{TT_j},

where OR^h_{j,j+1} represents the percentage of the overlapping interval between the j-th and (j+1)-th TT windows with respect to the whole duration of the j-th TT window. Moreover, as the guard band duration may vary with time depending on the length of the AVB/BE frames that compete in the current GCL cycle, the distance between frozen intervals is adjusted by the term G^{h,np}_j − G^{h,np}_i. The expected overlapping interval is discarded from the frozen interval in each TT window using the term L^h_{j,j+1}, which is calculated between the current TT window and the one allocated on its right, i.e., the closing-edge overlapping. The opening-edge overlapping interval is calculated as the closing-edge overlapping interval with the previous TT window on the left. Thus, all expected overlaps are considered over all NW^h_GCL TT windows. As examples, different offset situations for adjacent TT windows are presented in Fig. 7, with the determination of the critical intervals under non-preemption mode.
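The [x]^+ clipping of the guard band by the opening-edge overlap can be sketched as a one-line helper; the durations used are illustrative:

```python
# Sketch of the protection rule in the text: the effective frozen interval
# before the j-th TT window is the guard band shortened by any opening-edge
# overlap with the previous window, clipped at zero.

def protection_np(gb, overlap_left):
    """G^{h,np}_j = [GB^{h,np}_j - L^h_{j-1,j}]^+ (illustrative time units)."""
    return max(0.0, gb - overlap_left)

print(protection_np(120.0, 50.0))   # overlap absorbs part of the guard band -> 70.0
print(protection_np(120.0, 200.0))  # overlap covers the whole guard band -> 0.0
```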

3) AGGREGATE TT ARRIVAL CURVE WITH PREEMPTION MODE
As classified in Section III-B, the TSN standard supports preemption with and without the HOLD/RELEASE function. In this article, preemption with HOLD/RELEASE is considered to avoid degradation of the TT performance. Although a guard band must still be used with the HOLD/RELEASE feature, it has a smaller length than in the non-preemption mode. Thus, the protection interval for the j-th TT window is limited to

G^{h,p}_j = [GB^{h,p}_j − L^h_{j−1,j}]^+,

where GB^{h,p}_j equals the time required to transmit at most 124 bytes of an AVB fragment. Under the worst transmission situation, there are no AVB transmissions during the protection intervals, and the credit of both AVB queues is frozen. Similar to the non-preemption case, the aggregate arrival curve can be formulated as the sum of the arrival curves produced by all TT windows and the related preemption guard bands in T^h_GCL, with the i-th TT window as a reference:

α^{h,i}_{G^p+TT}(t) = Σ_{j=1}^{NW^h_GCL} C^h · (G^{h,p}_j + W^h_{TT_j} − L^h_{j,j+1}) · ⌈(t − (OD^h_{j,i} − (G^{h,p}_j − G^{h,p}_i))) / T^h_GCL⌉.

With the preemption mode, an overhead interval (OH) is used in front of the remaining AVB fragments after each TT window. The gap between the j-th and (j+1)-th TT windows (GAP^h_{j,j+1}) may be no larger than OH + GB^{h,p}_{j+1}, i.e., GAP^h_{j,j+1} ≤ OH + GB^{h,p}_{j+1}. In this case, the OH interval is not allocated between the TT windows, and we assume frozen AVB credits during these gaps. Accordingly, the OH interval for the j-th TT window is determined as

OH^h_j = OH when GAP^h_{j,j+1} > OH + GB^{h,p}_{j+1}, and OH^h_j = 0 otherwise,

and the gap between the j-th and (j+1)-th TT windows that cannot be used for AVB transmissions equals

GAP^h_{j,j+1} when GAP^h_{j,j+1} ≤ OH + GB^{h,p}_{j+1}, and zero otherwise.

Fig. 8 shows different situations for the adjacent TT window offsets, determining the related intervals under preemption mode. As shown in Fig. 8(a) and (b), only one of OH^h_j and the unusable gap GAP^h_{j,j+1} is larger than zero for each TT window, and both are located after the associated TT window. This means that ∀j ∈ NW^h_GCL, when OH^h_j > 0, GAP^h_{j,j+1} = 0, and vice versa. Hence, the AVB credits behave differently during the OH^h_j and GAP^h_{j,j+1} intervals: in OH^h_j, the credit increases for the associated AVB queue and decreases for the other, while in GAP^h_{j,j+1}, both AVB credits are frozen as neither queue can be served. Thus, with the i-th TT window as a reference, the arrival curve during the OH^h_j intervals can be given as

α^{h,i}_OH(t) = Σ_{j=1}^{NW^h_GCL} C^h · OH^h_j · ⌈(t − (OD^h_{j,i} + W^h_{TT_j})) / T^h_GCL⌉,

while the arrival curve during the GAP^h_{j,j+1} intervals is combined with the TT and protection intervals as

α^{h,i}_{G^p+TT+GAP}(t) = Σ_{j=1}^{NW^h_GCL} C^h · (G^{h,p}_j + W^h_{TT_j} − L^h_{j,j+1} + GAP^h_{j,j+1}) · ⌈(t − (OD^h_{j,i} − (G^{h,p}_j − G^{h,p}_i))) / T^h_GCL⌉.
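The mutual exclusion between the OH and unusable-gap intervals described above can be sketched as a small helper; the threshold rule follows the text, while the numbers are illustrative:

```python
# Sketch of the OH/GAP rule for preemption mode: if the gap after the j-th
# TT window is at most OH + GB^{h,p}_{j+1}, the whole gap stays unusable
# (frozen credits) and no reassembly overhead is allocated; otherwise only
# the overhead OH is lost. Time units are illustrative.

def oh_and_gap(gap, oh, gb_next):
    """Return (allocated OH interval, unusable gap) after a TT window."""
    if gap <= oh + gb_next:
        return 0.0, gap   # gap too small: entirely frozen, no OH allocated
    return oh, 0.0        # gap large enough: only OH is lost

print(oh_and_gap(10.0, 24.0, 8.0))   # small gap -> (0.0, 10.0)
print(oh_and_gap(100.0, 24.0, 8.0))  # large gap -> (24.0, 0.0)
```

Exactly one of the two returned intervals is nonzero, matching the observation that OH^h_j > 0 implies GAP^h_{j,j+1} = 0 and vice versa.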

VI. WORST-CASE LATENCY ANALYSIS FOR AVB TRAFFIC
In this section, the worst-case AVB-X end-to-end latency is formulated for both allocation modes, i.e., non-preemption and preemption. Firstly, the AVB-X service curve is expressed based on the aggregate TT arrival curves determined in the previous section. Then, the AVB-X arrival curve is determined by considering three different shapers. Finally, the worst-case AVB-X latency is found for nodes separately and then for the end-to-end path.
Assume that $R^h_X(t)$ and $R^{h*}_X(t)$ are the arrival and departure processes for class AVB-X traffic at node h. The arrival and departure events are considered during an arbitrary interval $[s, t]$, where $\forall s, t \in \mathbb{R}^+$, $t \ge s$. Assume that all priority queues at the egress port of node h are empty at s, i.e., $R^{h*}_X(s) = R^h_X(s)$, $R^{h*}_{TT}(s) = R^h_{TT}(s)$, and $R^{h*}_{G^{np}}(s) = R^h_{G^{np}}(s)$. Thus, the AVB credit is zero at s ($cr^h_X(s) = 0$). The interval $\Delta t = t - s$ can be decomposed as $\Delta t = t^+_X + t^-_X + t^0$ according to the status of the AVB credit: $t^+_X$ represents the intervals when the AVB credit is increasing, $t^-_X$ when it is decreasing, and $t^0$ when it is frozen. With non-preemption mode, $t^0$ covers all TT windows ($W^h_{TT_j}$) and protection intervals ($G^{h,np}_j$), discarding the overlapping interval between adjacent windows from one of them. For each TT window, we discard the closing-edge overlapping interval with the adjacent TT window ($L^h_{j,j+1}$), which is accounted for in $W^h_{TT_{j+1}}$. The credit variation for $Q^h_X$ during $\Delta t$ can then be given as follows. By substituting $t^+_X = \Delta t - t^0 - t^-_X$ into the above equation, we obtain the next expression. Under the worst case, the egressed TT frames during $\Delta t$ can be expressed as follows, and the protection intervals result in wasted service. As the service during the $t^0_{TT}$ and $t^0_{G^{np}}$ intervals after any time t is less than or equal to the number of ingress frames during these intervals, the sum of frozen-credit intervals during $\Delta t$ can be bounded in terms of $\alpha^{h,i}_{G^{np}+TT}(t)$, the aggregate arrival curve of the TT windows and the related non-preemption guard bands when the i-th TT window is taken as a reference, as expressed in (10). Using (21) and (25), the departure process of AVB frames during $\Delta t$ can be bounded. Since $R^{h*}_X(t)$ is a wide-sense increasing function, it can be rewritten accordingly, where sup denotes the supremum (least upper bound).
As the term $R^{h*}_X(t) - R^{h*}_X(s)$ represents the service offered to AVB-X traffic during $\Delta t$, the lowest service curve for AVB-X traffic under non-preemption mode can be given as follows, where inf denotes the infimum (greatest lower bound) and the notation $\otimes$ represents the min-plus convolution.
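The min-plus convolution $(f \otimes g)(t) = \inf_{0 \le s \le t}\{f(s) + g(t-s)\}$ used here can be approximated numerically on a discrete time grid. The following minimal sketch is illustrative only and is not the paper's RTC-toolbox implementation:

```python
# Minimal numerical sketch of the min-plus convolution:
# (f (x) g)(t) = inf over 0 <= s <= t of { f(s) + g(t - s) },
# evaluated by brute force on a uniform discrete grid.

def min_plus_conv(f, g, n):
    """f, g: lists of curve samples on the same grid of n points."""
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

f = [0, 1, 2, 3, 4]   # e.g. a pure rate curve
g = [0, 0, 1, 2, 3]   # e.g. a delayed rate curve
h = min_plus_conv(f, g, 5)
# h[t] picks the cheapest split of t between f and g
```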

2) WITH PREEMPTION MODE
Similar to the non-preemption mode, $\Delta t$ can be represented as the sum of three different intervals depending on the AVB credit situation: frozen-, increasing-, and decreasing-credit intervals. The frozen-credit intervals can be given as follows, where $t^0_{G^p}$ represents the protection intervals in the preemption case, as limited in (13), and $t^0_{GAP}$ represents the possibly unused gaps between TT windows, as bounded in (16). The AVB-X credit decreases during $t^-_X = t^-_{X,OH} + t^-_{X,sending}$, where $t^-_{X,OH}$ represents the overhead intervals as defined in (15) and $t^-_{X,sending}$ represents the AVB-X transmission intervals, which can be given as follows. By assuming the worst transmission cases, as in the non-preemption case, we obtain the following. After considering the i-th TT window as a reference, the frozen-credit and overhead intervals can be bounded; thus, by combining (35), (36), and (31), the egressed frames from node h can be expressed accordingly. Finally, in the same way as in the non-preemption case, the service curve for AVB-X traffic during $\Delta t$ with the preemption mode, taking the i-th TT window as a reference, can be formulated as follows. It can be noticed that the service curve changes according to the reference TT window. Thus, to ensure worst-case evaluations for both preemption modes, the lowest service curve must be obtained by simultaneously considering all possible service curves during the hyper-period, using the following expression, where $\beta^{h,np,i}_X(t)$ and $\beta^{h,p,i}_X(t)$ are obtained using (28) and (38), respectively.
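Taking the lowest of all candidate service curves over the hyper-period amounts to a pointwise minimum across the curves obtained with each TT window as reference. A minimal sketch, assuming the curves are sampled on a common grid:

```python
# Sketch: guarantee a worst-case bound by taking the pointwise minimum of
# the candidate service curves, one per reference TT window.
# Sampled-curve representation is an illustrative assumption.

def lowest_service_curve(candidates):
    """candidates: list of sampled service curves of equal length."""
    return [min(vals) for vals in zip(*candidates)]

beta_ref0 = [0, 0, 1, 2, 3]   # curve with window 0 as reference
beta_ref1 = [0, 1, 1, 1, 4]   # curve with window 1 as reference
beta = lowest_service_curve([beta_ref0, beta_ref1])
```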

B. DETERMINATION OF THE UPPER-BOUND AVB-X ARRIVAL CURVE ($\alpha^{h,p(np)}_X(t)$)
At the first node (source), the arrival curve for the m-th AVB-X flow ($\tau_{X_m}$) can be given as follows, where $\sigma^{FN}_{X_m}$ is the burst of $\tau_{X_m}$ ($\sigma^{FN}_{X_m} = f_{X_m}$, where $f_{X_m}$ represents the frame size of $\tau_{X_m}$), and $\rho^{FN}_{X_m}$ is the long-term rate of $\tau_{X_m}$ ($\rho^{FN}_{X_m} = f_{X_m}/T_{X_m}$, where $T_{X_m}$ represents the period of $\tau_{X_m}$). If many AVB-X flows compete for transmission at the source, the aggregate arrival curve at the associated egress port is formed accordingly. For a non-first node, three shapers limit the upper-bound arrival curve for AVB-X traffic from node h−1 to node h. The first shaper represents the sum of the individual AVB-X arrival curves egressed from h−1 and sent to h [9], [10]. As this arrival shaper depends on the servicing at the previous node, it is defined based on the preemption (p) and non-preemption (np) modes. The second shaper represents the constraint on the egress bit rate from h−1: the AVB-X group in h−1 arrives at h at the speed $C^{h-1}$. Thus, the arrival curve limited by the link speed can be given as in [30], [11], where $f^{[h-1,h],max}_X$ is the largest AVB-X frame transmitted from h−1 to h. The third shaper represents the CBS limitation, as discussed in [11], and the arrival curve under this constraint is formed as in [11]. Then, the aggregate arrival curve for the AVB-X traffic transferred from h−1 to h is bounded by the minimum of the three. Subsequently, the upper-bound arrival curve for $\tau_X$ in each non-first node h can be found from the input arrival curve at the previous node h−1, where the shifting term represents the worst-case latency that $\tau_X$ experiences in node h−1, and $D^{h-1}_{X,queue}$ represents the worst-case delay that a $Q^{h-1}_X$ frame experiences from its arrival at the queue until its selection at the egress port, as calculated using (48).
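Bounding the arrivals by the minimum of the three shapers can be sketched as follows. This is a simplified illustration under assumed leaky-bucket forms for the flow and CBS shapers; all parameter values are assumptions, and the real curves in the paper depend on the previous node's service.

```python
# Sketch: bound the AVB-X arrivals at node h by the tightest of the three
# shaper curves (aggregate of flows, link speed, CBS). Illustrative values.

C = 1e9            # assumed link speed from h-1 to h (bit/s)
f_max = 1500 * 8   # assumed largest AVB-X frame on the link (bits)

def leaky_bucket(sigma, rho):
    """Leaky-bucket curve alpha(t) = sigma + rho * t."""
    return lambda t: sigma + rho * t

def link_shaper(t):
    # Arrivals cannot exceed the link rate plus one maximum frame.
    return f_max + C * t

def arrival_bound(t, flow_curves, cbs_curve):
    agg = sum(a(t) for a in flow_curves)           # first shaper
    return min(agg, link_shaper(t), cbs_curve(t))  # tightest of the three

flows = [leaky_bucket(500 * 8, 8e6), leaky_bucket(800 * 8, 12.8e6)]
cbs = leaky_bucket(3000 * 8, 0.35 * C)   # idle-slope-limited rate (assumed)
bound = arrival_bound(1e-3, flows, cbs)  # aggregate dominates at this t
```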
Hence, the queuing delay represents the critical part of the overall delay and mainly depends on the GCL implementation, applied preemption mechanism, related CBS parameters, and traffic intensity in the associated node.

C. WORST-CASE END-TO-END LATENCY FOR AVB-X TRAFFIC
In the network calculus approach, the worst-case queueing delay for AVB-X traffic in node h equals the maximum horizontal distance between $\alpha^{h-1,p(np)}_X(t)$ and $\beta^{h,p(np)}_X(t)$. Then, the worst-case end-to-end latency for AVB-X traffic can be calculated as the sum of the worst-case latencies experienced along the selected path, where N represents the number of nodes in the path that the AVB-X traffic passes through.
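The maximum horizontal distance can be approximated numerically by brute force; the following sketch uses an assumed leaky-bucket arrival and rate-latency service pair, for which the analytic bound $D = T + \sigma/C$ is known:

```python
# Sketch: worst-case per-hop delay as the maximum horizontal deviation
# h(alpha, beta) = sup over t of inf { d >= 0 : alpha(t) <= beta(t + d) },
# computed on a discrete grid. Illustrative only; curves are assumptions.

def max_horizontal_deviation(alpha, beta, t_max, dt):
    steps = int(t_max / dt)
    worst = 0.0
    for i in range(steps):
        t = i * dt
        d = 0.0
        # advance d until the service curve catches up with the arrivals
        while beta(t + d) < alpha(t) and t + d < 10 * t_max:
            d += dt
        worst = max(worst, d)
    return worst

alpha = lambda t: 2.0 + 1.0 * t            # leaky bucket: sigma=2, rho=1
beta = lambda t: max(0.0, 4.0 * (t - 1))   # rate-latency: C=4, T=1
d = max_horizontal_deviation(alpha, beta, t_max=10.0, dt=0.01)
# analytic bound for this pair: D = T + sigma / C = 1 + 2/4 = 1.5
```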

VII. PERFORMANCE EVALUATION

A. CASE STUDY AND EXPERIMENTAL SETUP
As the network calculus approach is used to formulate our model, the related GCLs are built using the Java API of the Real-Time Calculus (RTC) toolbox [31], which is mainly implemented with min-plus and max-plus operators to represent real-time systems. Our evaluation programs are run on a computer with an Intel Core i7-3770 CPU at 3.40 GHz and 12 GB of RAM. For the performance evaluations, a realistic vehicular use case is used as one of the TSN targeted applications, as shown in Fig. 9(a). The connected vehicle can be represented by a simple networking topology, as shown in Fig. 9(b) [5], [8]. The associated end systems could be sensors, actuators, or cameras distributed in the vehicle to gather relevant driving information from the surrounding area.
It is assumed that the data rate equals 1 Gbps for all physical links in all experiments. Further, incoming TT flows are differentiated into eight priority queues in each node. The predefined timetable (GCL) controls the open and close events for each queue gate, implemented as described in Section IV-B. The opening-edge times for all TT windows in the hyper-period at node h ($t^{h,o}_k$) are tabulated in Table 2, and the closing-edge times in the GCL implementations are updated using (6) and Section V-B2 accordingly. It is assumed that the open-window duration is 20 µs with a 500 µs period for all TT queues in each node. Note that the symbol k represents both the TT priority class and the TT window order in the hyper-period, as the window durations and periods are the same for all TT queues. The WCD for AVB traffic is evaluated under one-sided and two-sided overlapping scenarios (porosity and back-to-back configurations), as illustrated in Table 2, using two different AVB loading cases (light and heavy loads), as described in Table 3.
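The back-to-back window placement with an overlapping ratio can be sketched as below. The offsets are illustrative assumptions, not the values of Table 2; only the 20 µs window, 500 µs period, and eight queues come from the setup above.

```python
# Illustrative sketch of the experimental GCL setup: 20 us TT windows with
# a 500 us period per queue, placed back to back with an overlapping ratio
# OR between adjacent windows. Offsets are assumed, not those of Table 2.

W = 20e-6         # TT window duration (s)
PERIOD = 500e-6   # TT window period (s)

def back_to_back_openings(n_queues, OR):
    """Opening-edge times for one hyper-period: each window overlaps the
    previous one by OR * W, so consecutive openings are W*(1-OR) apart."""
    step = W * (1.0 - OR)
    return [k * step for k in range(n_queues)]

opens = back_to_back_openings(8, OR=0.3)
# with 30% overlap each new window opens 14 us after the previous one
```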

B. NUMERICAL RESULTS AND DISCUSSION
This subsection presents our results under preemption mode in detail, and then, a brief comparison between the preemption and non-preemption modes is provided. Finally, our findings are compared with the previous related works.

1) AVB LATENCY EVALUATIONS UNDER LIGHT LOAD
For this case, 10 AVB flows (5 AVB-A and 5 AVB-B) are considered over the routes in Fig. 9(b), as specified and distributed in Table 3. Fig. 10(a) and (b) show the effect of TT overlapping on the WCD of AVB-A and AVB-B traffic, respectively, with the back-to-back configuration under multiple CBS settings. It is noticed that the WCD for all AVB flows decreases dramatically as OR increases. In Fig. 10(a), the WCD performance of the AVB-A flows is assessed under two different AVB-A idle slopes (idSl_A = 0.35, 0.55) and a fixed AVB-B idle slope (idSl_B = 0.25). As expected, lower WCDs are obtained with idSl_A = 0.55 than with 0.35, as a higher idSl_A means more bandwidth is offered for AVB-A transmissions. For each CBS setting, the AVB-A flows experience slightly different WCDs depending on their selected paths and frame sizes. For example, AVB 1 and 5 experience lower WCDs than the others, as they do not compete with other AVB-A flows on Link 6 and Link 3, respectively. However, AVB 1 experiences a slightly lower WCD than AVB 5 because it has a smaller frame size, i.e., $f^{h,max}_{AVB1}$ = 500 bytes versus $f^{h,max}_{AVB5}$ = 800 bytes. Further, AVB 3 and AVB 4 experience the highest WCDs, as they compete with each other on the same path (ES2-SW1-SW2-ES4). As AVB 3 has a larger frame (1200 bytes) than AVB 4 (750 bytes), it experiences higher selection delays in the associated nodes, leading to a higher end-to-end latency. In Fig. 10(b), the WCD behavior of the AVB-B flows is examined under two different AVB-B idle slopes (idSl_B = 0.20, 0.30) and a fixed AVB-A idle slope (idSl_A = 0.50). As observed from the figure, lower latencies are obtained using idSl_B = 0.30 than 0.20. Moreover, frames with different sizes experience unequal WCDs even when they share the same path. For example, AVB 7 and 8 share the path ES2-SW1-SW2-ES4, but their WCDs are slightly different.
The reason is that their frame sizes differ (400 and 850 bytes), resulting in different selection delays, as formulated in Section VI-C. Further, AVB 6 has the lowest WCD, as no other AVB-B flows compete on Link 1 and Link 7. The highest WCDs are experienced by AVB 9 and 10, as they compete on the same path and their combined size of 1550 bytes is larger than the AVB-B load on any other link. Since AVB 10 has a larger frame (1000 bytes) than AVB 9 (550 bytes), it experiences a higher end-to-end latency.
With the porosity configuration, the WCD performance with respect to OR is evaluated for AVB-A and AVB-B flows with multiple CBS settings in Fig. 11(a) and (b), respectively. Compared to the back-to-back configuration, the WCD for all AVB flows with the porosity style decreases by smaller percentages as OR increases. Furthermore, the WCD behavior differs from one flow to another because the AVB transmissions are interrupted by multiple TT windows, unlike in the back-to-back style, where the AVB transmissions are allocated in one blank interval in the hyper-period. It can be noticed that the WCD performance with the porosity configuration is better than that with back-to-back for all AVB flows at lower OR percentages under the light loading case (10 AVB flows). However, the back-to-back design gives lower latencies than the porosity style under a higher degree of overlapping, as shown in Figs. 10 and 11. As a result, TT overlapping with the back-to-back configuration yields more WCD reduction than with porosity. For example, under 10% and 30% TT overlapping with back-to-back, the WCD for AVB-A is reduced by 6.57% and 19.72% on average, and by 5.59% and 16.77% on average for AVB-B traffic. With the porosity style, the AVB-A WCD is reduced by 2.50% and 11.43% on average, and the AVB-B WCD by 2.40% and 12.52%, under 10% and 30%, respectively. Among the AVB-A flows, AVB 1 experiences the lowest WCD and AVB 3 the highest. Similarly, among the AVB-B flows, AVB 6 still obtains the lowest WCD and AVB 10 the highest, for the same reasons as in the back-to-back case.

2) AVB LATENCY EVALUATIONS UNDER HEAVY LOAD
For this case, 30 AVB flows (15 AVB-A and 15 AVB-B) are assumed to share the links in Fig. 9(b), as specified in Table 3. Fig. 12(a) and (b) show the impact of OR on WCD with the back-to-back style for AVB-A and AVB-B flows, respectively, and Fig. 13(a) and (b) with the porosity configuration. For these evaluations, we use the same idle slopes for AVB-A and AVB-B as in the light loading case (10 AVB flows). Fig. 12(a) and (b) show that the effect of the heavy load on the latency performance is higher than in the light load case of Fig. 10(a) and (b). However, the WCD behavior remains almost the same as under light load conditions, i.e., increasing OR reduces the WCDs for all AVB flows by the same percentages with the associated idle slopes. Moreover, some AVB flows experience the same WCD because they share the same end-to-end networking path and have equal frame sizes, such as AVB 1, 11, and 21. The WCDs of some AVB-A flows change slightly relative to one another. For example, AVB 1 experiences lower WCDs than AVB 5 in the light load case, as shown in Fig. 10(a), but under heavy loading scenarios AVB 5 experiences the lowest, as shown in Fig. 12(a). This change is expected, as these flows cross different paths with different loads and have unequal frame sizes. Fig. 13(a) and (b) show that, as observed under the light load case, the WCD reduction with increasing OR in the porosity style is lower than with back-to-back. For example, under 10% OR and 30% OR with back-to-back, the WCD for AVB-A is reduced by 5.93% and 17.79% on average, and by 4.14% and 12.40% for AVB-B traffic. With the porosity style, the AVB-A WCD is reduced by 5.15% and 11.48% on average, and the AVB-B WCD by 3.70% and 11.54%, under 10% OR and 30% OR, respectively. Furthermore, comparing the results in Fig. 13(a) and (b) with those in Fig. 12(a) and (b), it can be noticed that the WCD performance with back-to-back is better than with porosity in most overlapping situations. This happens because there is more wasted bandwidth in the porosity style, as more guard bands and overhead intervals are required. The effect of these unoccupied intervals appears under heavy load because most blank intervals are then used for AVB transmissions.
A slight fluctuation can occur between WCD lines that are close to each other, depending on the related frame sizes, idSl_A, idSl_B, and the selected path. For example, the WCD curves for the AVB 1, 11, 21 and AVB 5, 15, 25 frames fluctuate with OR. Although the AVB 1, 11, 21 frames (500 bytes) are smaller than the AVB 5, 15, 25 frames (800 bytes) and both groups belong to class A, the former experience a slightly higher WCD when 30% ≤ OR ≤ 70%, as shown in Fig. 13(a). This fluctuation happens because the frames traverse different paths, as listed in Table 3: AVB 1, 11, 21 share ES1-SW1-SW2-ES5, while AVB 5, 15, 25 share ES3-SW1-SW2-ES6. As formulated in Section VI-C, the total WCD is calculated by aggregating all WCDs experienced through the selected links, and the upper-bound arrivals in each node depend on the WCD experienced in the previous node. Varying OR by the same value in all nodes affects the WCDs experienced through the selected links by different percentages, leading to fluctuating WCDs.

3) COMPARISON BETWEEN PREEMPTION AND NON-PREEMPTION MODES WITH AVB LATENCY
In this subsection, a critical comparison between the preemption and non-preemption modes is presented with respect to the WCD for AVB traffic. For a meaningful comparison, the results for the preemption mode in the previous subsections are summarized together with the non-preemption results for the back-to-back and porosity configurations in Fig. 14(a) and (b), respectively. In both figures, the non-preemption results are drawn with solid lines and the preemption results with markers, keeping the same color for both modes under the same configuration and loading condition. Since larger bandwidth is assumed for AVB-A transmissions than for AVB-B in the related idle slopes, the average WCD for AVB-A traffic is always lower than for AVB-B traffic.
As expected, the preemption mode gives lower WCDs on average than non-preemption for all configurations, loading conditions, and overlapping ratios (ORs). Moreover, the porosity style results in lower WCDs on average than back-to-back, as shown in Fig. 14(a) and (b). For example, with 20% OR and the back-to-back configuration, the preemption mode reduces the WCD for AVB-A traffic by 5.48% and 4.91% on average, and for AVB-B traffic by 4.35% and 4.04%, under the light and heavy loading cases, respectively, compared to non-preemption. However, with the same OR and the porosity style, the preemption mode outperforms non-preemption with 25.91% and 19.94% average WCD reduction for AVB-A traffic, and 23.76% and 15.89% for AVB-B traffic, under the light and heavy loading cases, respectively. The observed advantage of the preemption mode stems from saving more bandwidth through smaller guard bands compared to non-preemption. As described in Section III-B, the preemption guard band is only 123 bytes, whereas non-preemption requires 1500 bytes to protect TT windows from unscheduled transmissions. As the proposed model adopts preemption with the Hold and Release feature, no negative impact on the TT latency performance occurs.

4) AVB LATENCY COMPARISON WITH THE PREVIOUS WORKS
This subsection compares our results with the previous related works in [9] (AVB-TSN18) and [11] (AVB-TSN21). The comparison is provided using the preemption mode for both configurations (back-to-back and porosity) under the light and heavy load assumptions (10 and 30 AVB flows). The model in [11] improved the analysis by implementing tighter arrival curves for AVB traffic than those presented in [9]. In [9], the authors formulated the AVB arrival curve based on the aggregate arrivals from all individual nodes connected to the associated node's ingress port, whereas Zhao et al. [11] added two further arrival shapers that control the AVB arrivals, i.e., the link-speed and CBS shapers. In AVB-OBTTW, all three arrival shapers are considered, as presented in Section V-B. We select three overlapping ratios for the comparison with the previous works, i.e., 10%, 20%, and 30%. It is noticed that AVB-OBTTW reduces the WCD for all AVB flows compared to AVB-TSN18 and AVB-TSN21, and the WCD reduction with the back-to-back style is higher than that with porosity. For the light load case, as shown in Fig. 15(a) and (b), AVB-OBTTW reduces the WCD with back-to-back by 8.28%, 14.17%, and 20.07% compared to AVB-TSN21, and by 5.94%, 10.55%, and 14.85% with porosity, under 10% OR, 20% OR, and 30% OR, respectively. Furthermore, by considering the selection delay in our WCD calculations, as presented in Section VI-C, it is worth noting that AVB-OBTTW obtains more accurate results than the other models, which give equal WCDs for AVB flows that share the same path. For example, AVB 3 and 4 experience the same WCD in the AVB-TSN18 and AVB-TSN21 models even though their frame sizes differ. In contrast, AVB-OBTTW obtains WCDs proportional to the frame sizes, resulting in different WCDs for flows that share the same path but have unequal frame sizes, such as AVB 3 and 4, AVB 7 and 8, and AVB 9 and 10. For example, as shown in Fig. 15(a) and (b), AVB 3 (with a size of 1200 bytes) experiences a larger WCD than AVB 4 (with a size of 750 bytes). For the heavy load case, as shown in Fig. 16(a) and (b), it can be observed that the AVB-TSN18 model produces more pessimistic WCDs than AVB-TSN21, as it is based only on the aggregate arrivals of the individual AVB flows. Accordingly, the other arrival shapers, i.e., the link-speed and CBS shapers, have a considerable impact on the WCD bounds, as applied in the AVB-TSN21 and AVB-OBTTW models. Further improvements are achieved using AVB-OBTTW over AVB-TSN21 under the different OR percentages. For example, AVB-OBTTW reduces the WCD with back-to-back by 6.64%, 11.39%, and 16.15% compared to AVB-TSN21, and by 5.73%, 8.97%, and 13.28% with porosity, under 10% OR, 20% OR, and 30% OR, respectively. Thus, the WCD bounds for AVB flows can be reduced by a percentage that depends on the maximum allowable OR between TT windows, which can be calculated using the OFWOS model in [8]. The maximum allowable OR is the highest overlapping ratio between TT windows that still guarantees the end-to-end latency deadlines of the associated TT queue.

FIGURE 16. Comparison between WCD performances using preemption mode for three different models; our model (AVB-OBTTW), AVB-TSN18 [9], and AVB-TSN21 [11] by assuming 30 AVB flows compete on the links with; (a) back-to-back configuration. (b) porosity configuration.

5) BW COMPARISON WITH THE PREVIOUS WORKS
Allowing TT windows to overlap helps minimize the bandwidth waste in each link. When overlapping occurs, there is no need to place a guard band between the overlapped windows to protect against unscheduled transmissions. Accordingly, the bandwidth waste is reduced, especially with the back-to-back configuration. The unallocated guard bands and the overlapping intervals save more BW for unscheduled traffic, and the highest unscheduled BW is obtained with the maximum allowable OR. Fig. 17 shows the available unscheduled BW using the AVB-TSN18, AVB-TSN21, and AVB-OBTTW models under 10%, 20%, and 30% ORs. The comparison is performed with the preemption and non-preemption modes under the back-to-back and porosity styles. As AVB-TSN18 and AVB-TSN21 do not consider TT overlapping, only AVB-OBTTW is assessed under multiple ORs. Further, AVB-TSN18 and AVB-TSN21 give the same BW assignment, with no difference between their GCL implementations. For the same configuration, the preemption mode offers a higher unscheduled BW than non-preemption in all models, as it requires a smaller guard band. It can be noticed that AVB-OBTTW saves more unscheduled BW than the others in all cases, and the BW improvement grows as OR increases. For example, under porosity and preemption mode, 10%, 20%, and 30% ORs in AVB-OBTTW offer 70.26%, 73.46%, and 76.66% unscheduled BW, respectively, whereas only 66.12% is available using AVB-TSN18 and AVB-TSN21. For the same OR, the highest BW is obtained under back-to-back with preemption. Compared to the previous models, the highest relative BW enhancement is obtained with the non-preemption mode. For example, the non-preemption mode with 10% OR increases the unscheduled BW by 20% and 12.8% under the back-to-back and porosity modes, respectively, while for the preemption mode with the same OR, the BW percentage is enhanced by 4.84% and 4.14% under back-to-back and porosity, respectively.

FIGURE 17. Comparison between unscheduled BW percentages using AVB-OBTTW, AVB-TSN18 [9], and AVB-TSN21 [11] with both preemption modes, under back-to-back and porosity configurations.
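The bandwidth-saving mechanism described above can be sketched with a simple back-to-back model: overlapped windows need no intervening guard bands and each overlap also shortens the merged TT block. The placement rule and all numbers below are illustrative assumptions, not the paper's exact GCL accounting.

```python
# Rough sketch of how overlapping increases unscheduled bandwidth under an
# assumed back-to-back placement: windows merge into one TT block, each
# overlap saves OR * W of air time, and only one guard band remains in
# front of the block. All values are illustrative assumptions.

HYPER = 500e-6   # hyper-period (s)
W = 20e-6        # TT window length (s)
N = 8            # TT windows per hyper-period

def unscheduled_bw(OR, gb):
    """Fraction of the hyper-period left for unscheduled traffic."""
    scheduled = N * W - (N - 1) * OR * W   # merged TT block length
    return 1.0 - (scheduled + gb) / HYPER  # one guard band before the block

bw_no_overlap = unscheduled_bw(0.0, gb=12e-6)   # ~1500-byte GB at 1 Gbps
bw_30 = unscheduled_bw(0.3, gb=12e-6)           # 30% overlap frees more BW
```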

VIII. CONCLUSION
The TSN framework is introduced as an appropriate environment for mixed-criticality applications, such as automotive systems and automated industries. In TSN, a time-aware shaping mechanism schedules TT streams through a window-based transmission system. Complete isolation between TT windows minimizes the available bandwidth for unscheduled traffic (AVB and BE). Although several scheduling algorithms have been implemented based on overlapped TT windows, none evaluated the worst-case AVB performance under these overlaps.
In this article, a worst-case AVB latency under overlapping-based TT windows (AVB-OBTTW) algorithm is proposed. Complete WCD formulations are presented separately for the preemption and non-preemption modes. These models are evaluated using a realistic vehicular scenario by assuming back-to-back and porosity configurations in the hyper-period under light and heavy loading conditions. The numerical results confirm the WCD reduction with increasing OR for all experimental settings. As more bandwidth is given to AVB-A traffic than to AVB-B, AVB-A flows experience lower WCDs than AVB-B flows that share the same path. The overlapping effect in the back-to-back style is higher than that in porosity. On average, the porosity style gives lower WCDs under a light loading case, but as the load increases, back-to-back becomes better than porosity. Furthermore, the preemption mode gives lower WCDs than non-preemption under all experimental settings, especially with the porosity style. For example, with 20% OR, the preemption mode reduces the WCD compared to non-preemption by 4.92% and 4.48% on average using the back-to-back type, and by 24.84% and 17.92% using porosity, under the light and heavy loading scenarios, respectively. Compared to the latest related work, our model obtains lower WCDs, especially under large ORs. For example, with 10% and 30% ORs, AVB-OBTTW minimizes the WCD using the back-to-back style by 8.28% and 20.07% on average, and using porosity by 5.94% and 14.85%, respectively. Thus, combining our findings with those for TT traffic will be a helpful guide for vehicular designers to implement suitable GCL patterns that can effectively serve incoming data from interconnected devices, such as car sensors, actuators, and cameras.
Another significant TSN feature is frame replication and elimination for reliability (FRER), which ensures zero congestion loss for time-triggered frames, as defined in IEEE 802.1CB. This function is not considered for the selected nodes in this work. The FRER feature increases the ingress frames in each node, leading to higher WCDs for all time-critical traffic. An interesting research direction for future work is to apply other real-time networking topologies while considering this feature in all selected nodes.

APPENDIX A
MINIMUM AND MAXIMUM CREDIT LIMITS FOR AVB QUEUES

A. AVB-A CREDIT BOUNDING
According to the transmission situation, the AVB-X credit increases with slope $idSl_X$ or decreases with slope $sdSl_X$ between the minimum and maximum bounds, i.e., $cr^{min}_X \le cr_X(t) \le cr^{max}_X$. An AVB-A frame can be transmitted if it is in the associated queue, its credit is greater than zero ($cr_A(t) > 0$), and no lower-priority frame is being transmitted. As shown in Fig. 18, the minimum bound of the AVB-A credit ($cr^{min}_A$) is reached after transmitting the largest AVB-A frame when the credit at the beginning of the transmission is greater than zero by a minimal positive value ($cr_A(t) = \epsilon^+$, where $\epsilon^+ \to 0^+$), as given by [32]
$$cr^{min}_A = sdSl_A \, \frac{f^{max}_A}{C}.$$
The maximum bound of the AVB-A credit ($cr^{max}_A$) is reached when the associated frame encounters the worst transmission situation, as shown in Fig. 18. This case happens when the credit is below zero by a very small value ($cr_A(t) = \epsilon^-$, where $\epsilon^- \to 0^-$) and a lower-priority frame (AVB-B or BE) starts its transmission. The AVB-A frame then waits $\Delta t^{max}_A = f^{max}_n / C$, where $f^{max}_n = \max(f^{max}_B, f^{max}_{BE})$. Therefore, the AVB-A credit is bounded as [32]
$$sdSl_A \, \frac{f^{max}_A}{C} \le cr_A(t) \le idSl_A \, \frac{f^{max}_n}{C}.$$
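The two bounds can be evaluated numerically as below. This sketch assumes the standard CBS results of [32], i.e., $cr^{min} = sdSl \cdot f^{max}/C$ (with $sdSl < 0$) and $cr^{max} = idSl \cdot f^{max}_n/C$; the slope and frame-size values are illustrative.

```python
# Sketch of the CBS credit bounds for class A, under the standard results
# cited above ([32]). All parameter values are illustrative assumptions.

C = 1e9                  # link rate (bit/s)
idSl_A = 0.35 * C        # assumed AVB-A idle slope
sdSl_A = idSl_A - C      # send slope (negative by construction)
f_max_A = 1200 * 8       # assumed largest AVB-A frame (bits)
f_max_n = 1500 * 8       # assumed largest lower-priority (AVB-B/BE) frame (bits)

cr_min_A = sdSl_A * f_max_A / C   # reached after sending the largest A frame
cr_max_A = idSl_A * f_max_n / C   # reached while blocked by a lower-priority frame
```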

B. AVB-B CREDIT BOUNDING
Similar to the AVB-A case, the minimum bound of the AVB-B credit can be given as [32]
$$cr^{min}_B = sdSl_B \, \frac{f^{max}_B}{C}.$$
As shown in Fig. 19, the maximum AVB-B credit is reached if the associated frame arrives in the queue while its credit is below zero, a BE frame starts its transmission when $cr_B(t) = \epsilon^-$ (where $\epsilon^- \to 0^-$), and, before the BE transmission completes, the largest AVB-A frame arrives in its queue with the maximum AVB-A credit bound. Thus, from the moment when $cr_B(t) = \epsilon^-$, the AVB-B frame waits $t^{max}_{BE}$ plus the time needed to serve the AVB-A transmissions. The maximum bound of the AVB-B credit can then be represented as in [32].

APPENDIX B
NETWORK CALCULUS BASIS FOR WORST-CASE LATENCY CALCULATION
The network calculus (NC) approach is commonly used in communication networks to model critical-time flow transmissions and to evaluate the related QoS parameters, such as network utilization and worst-case latency bounds. This article uses NC to assess the worst-case latency for AVB traffic in TSN. To this end, the arrival and service curves must be determined to express the traffic intensity and service availability in each selected node. These curves are mathematically formulated using min-plus operations. The arrival curve $\alpha(t)$ can be found by specifying the flow arrival process $R(t)$, which counts the bits entering a node until time t, as [33]
$$R(t) \le (R \otimes \alpha)(t) = \inf_{0 \le \lambda \le t} \{R(\lambda) + \alpha(t - \lambda)\},$$
where inf denotes the infimum (greatest lower bound) and $\otimes$ is the min-plus convolution. The typical arrival curve is the leaky-bucket model, given by [26]
$$\alpha_{\sigma,\rho}(t) = \rho t + \sigma, \quad t \ge 0,$$
where $\sigma$ is the largest flow burst and $\rho$ is the maximum limit of the flow's long-term average rate. The service curve $\beta(t)$ can be found by specifying the data departure process $R^*(t)$, which counts the bits egressing from a node until t, as [33]
$$R^*(t) \ge (R \otimes \beta)(t).$$
The typical service curve example is the rate-latency service curve, given by [26]
$$\beta_{C,T}(t) = C \, [t - T]^+,$$
where C is the egress-port service rate, T is the service latency, and $[x]^+$ equals x when $x \ge 0$ and 0 otherwise. As shown in Fig. 20, the worst-case latency bound in each node can be calculated as the maximum horizontal distance ($D_{max}$) between $\alpha(t)$ and $\beta(t)$ [33].

With the non-preemption mode, $t^0 = t^0_{TT} + t^0_{G^{np}}$, and we assume all guard bands are used to send largest-size AVB-X frames that start their transmissions. Thus, $t^0 = t^0_{TT}$, and $R^{h*,np}_{TT}$ can be bounded using $\beta^{h,np}_{TT}(\Delta t)$, the lower-bound service curve for TT traffic with the non-preemption mode. Since $R^{h*,np}_X(t)$ cannot be a decreasing function, using (64), (63) can be reformed accordingly. Hence, the upper-bound AVB-X arrivals occur when the TT service is assumed to be the lowest, which may differ depending on the referenced TT window.
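For the leaky-bucket and rate-latency pair above, the maximum horizontal distance has a well-known closed form; a short derivation, assuming $\rho \le C$:

```latex
% Horizontal deviation between a leaky-bucket arrival curve and a
% rate-latency service curve (standard network-calculus result).
\alpha_{\sigma,\rho}(t) = \sigma + \rho t, \qquad
\beta_{C,T}(t) = C\,[t - T]^{+}, \qquad \rho \le C .
% The delay D at time t satisfies \beta(t + D) \ge \alpha(t); with
% \rho \le C the binding instant is t = 0, giving
C\,(D - T) = \sigma
\;\Longrightarrow\;
D_{\max} = T + \frac{\sigma}{C}.
```

This closed form is a useful sanity check for the numerically computed AVB bounds.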
Thus, the lowest TT service curve can be found as $\beta^{h,np}_{TT}(\Delta t) = \min_i \beta^{h,np,i}_{TT}(\Delta t)$, where i refers to the reference TT window. $\beta^{h,np,i}_{TT}(\Delta t)$ can be determined by aggregating the services produced by all TT windows with the i-th window taken as reference.