Joint Scheduling and Routing Optimization for Deterministic Hybrid Traffic in Time-Sensitive Networks Using Constraint Programming

Real-time communications characterized by low-latency, deterministic, and reliable behavior are crucial for the advancement of emerging technologies. Consequently, Time-Sensitive Networking (TSN) has been developed to address the distinct demands of sectors such as industrial automation and autonomous vehicles. This is currently achieved through various methods that emphasize the scheduling of critical data traffic. However, many of these methods determine routes independently, potentially impacting the schedulability of transmissions. Additionally, there is a noticeable lack of emphasis on the scheduling and routing of low-priority transmissions within TSN. In this paper, we introduce the Optimized Hybrid Deterministic Scheduling and Routing (OHDSR) approach. This method takes into account the priority of communications to jointly optimize the scheduling and routing of Time-Triggered (TT) communications, while also catering to low-priority Best-Effort (BE) communications. Extensive experimental evaluations show the high efficiency of our proposed method. It ensures not only the prompt delivery of TT communications but also the delivery of BE communications within suitable time frames, with a maximum worst-case latency difference of 14.29% from TT communications, while meeting their respective deadlines. Moreover, the evaluation demonstrates the high scalability of the proposed approach, providing improved response times compared to recent related work for both routing and scheduling.


I. INTRODUCTION
The recent expansion in networking and its integration with new industries has led to the development of complex networks that are not adequately equipped for real-time communications. Network protocols must excel in scalability, cost, compatibility, safety, and real-time application handling.
Although Ethernet is a well-established network protocol, it is not well-suited for real-time applications [1]. In recent years, these features have become critical for modern networks and applications, including those used in automation industries and autonomous vehicles. To address these challenges, various Ethernet-based protocols have been proposed, including Time-Triggered (TT) Ethernet and Audio/Video Bridging (AVB). However, none of these latter protocols fully meet the requirements of new real-time applications [2].
A new technology, Time-Sensitive Networking (TSN), was introduced, building on TT-Ethernet and AVB to offer enhanced real-time features for Ethernet networks [3].
Building on various extensions related to TT-Ethernet and AVB, the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) [4] initiated the IEEE 802.1 Time-Sensitive Networking (TSN) Task Group (TG). This group has established a set of standards that enhance Ethernet capabilities. These standards introduce new features that bolster the robustness and determinism of networks. Additionally, they enable real-time synchronization and allow for effective management of different traffic types [5]. The TSN framework is designed to handle hybrid traffic encompassing three types, each with distinct priorities for mixed-criticality applications [6]. The highest priority class, which requires bounded latency and zero jitter, is the TT traffic. This is articulated in IEEE 802.1Qbv [7]. The mechanism employed to configure the TT traffic in the Gate Control List (GCL) schedule is known as Time-Aware Shaping (TAS). While TAS manages the highest-priority class, the Credit-Based Shaping (CBS) mechanism configures the next-highest traffic class: the AVB traffic. CBS was introduced in IEEE 802.1Qav [8]. The final TSN traffic class is Best-Effort (BE). Since there is no need for a timing guarantee for this traffic type, it is regarded as the lowest priority class.
Scheduling transmissions with real-time guarantees is imperative to achieve minimal latency for different types of traffic. It is crucial to design appropriate Gate Control Lists (GCLs) for each Bridge (BR) to ensure the transmission of TT traffic, while also taking into account low-priority traffic, such as BE traffic. Thus, the contributions of this paper can be summarized as follows:
• We introduce a model called Optimized Hybrid Deterministic Scheduling and Routing (OHDSR). This model addresses the routing and scheduling of time-triggered traffic while also considering best-effort traffic based on their respective priorities;
• We unveil an innovative bridge model that uses a Queue-Gating Time (QGT) approach. This method queues various traffic types and determines the specific times to open and close the gates;
• As part of the OHDSR model, we incorporate a priority stream constraint designed to regulate the gating mechanism at the egress port of every bridge.
The structure of this paper is organized as follows: Section II introduces the TSN background and discusses related work. Section III defines the system models. In Section IV, we present our optimization framework, detailing the constraints and objective functions. The results of our evaluations are discussed in Section V. The paper wraps up with conclusions in Section VI.

II. BACKGROUND AND RELATED WORKS
Time-Sensitive Networking (TSN) is an emerging technology that has garnered considerable attention in the fields of computer networks and communications research over the past decade. Since it is not as widely recognized as Ethernet technology, a clear overview of TSN is essential. In this section, we delve into the background of TSN and survey relevant studies in the area.

A. TIME-SENSITIVE NETWORKING (TSN)
For many years, Ethernet has dominated as the primary technology in networking infrastructure. Its straightforward installation and robust performance have cemented it as one of IEEE SA's most successful standardizations. However, for a long time, Ethernet lacked support for real-time communication, a critical requisite in sectors such as autonomous vehicles, avionics, and industrial automation. This gap led to the emergence of various protocols tailored for specific applications: CAN and FlexRay for autonomous vehicles; Avionics Full-Duplex Switched Ethernet (AFDX) for avionics; and EtherCAT, PROFINET, and Sercos III for industrial contexts. While these protocols are adept at their designated functions, they often encounter compatibility issues when integrated with Ethernet-centric networks, and their adaptability across different industries is limited [9]. Recognizing these challenges, the IEEE Time-Sensitive Networking (TSN) task group has in recent years been proactively standardizing a technology equipped for real-time communication.
TSN enhances Ethernet by enabling deterministic communication, ensuring that high-priority traffic is delivered within a defined time frame. The journey towards TSN can be traced back to the IEEE 802.1 Audio Video Bridging (AVB) standards, which introduced several features later adopted by TSN. Initially, the Audio Video Bridging Task Group was part of the IEEE 802.1 working group. However, it was renamed the ''Time-Sensitive Networking Task Group'' in November 2012 [10].
TSN incorporates various features, some of which originated from AVB, such as traffic shaping and scheduling, time synchronization, network management, and stream reliability. Maintaining the Quality of Service (QoS) for time-sensitive applications is paramount given their synchronization demands and time constraints. As such, TSN ensures QoS by integrating standardized features designed for specific objectives [11]. Numerous mechanisms have been introduced under different TSN standards: IEEE 802.1Qav [8] introduced the Credit-Based Shaper (CBS), while IEEE 802.1Qbv [7] detailed the Time-Aware Shaper (TAS) and the Gate Control List (GCL). The significance of timing for real-time applications and synchronization across various network components was addressed in the IEEE 802.1AS standard [12]. Another notable mechanism is frame preemption, which momentarily interrupts the transmission of low-priority traffic in favor of high-priority traffic. This was proposed in IEEE 802.1Qbu [13]. Though it is not the final standard devised by the TSN task group, IEEE 802.1CB [14], which introduces the Frame Replication and Elimination for Reliability (FRER) mechanism, is worth mentioning.
The Time-Aware Shaper (TAS) schedules time-critical streams within Time-Triggered (TT) windows, often referred to as protected traffic windows or time-aware traffic windows [15]. TAS traffic classes are predetermined based on the Priority Code Point (PCP) values of the Virtual Local Area Network (VLAN) ID (VID) tag in 802.1Q frames [5]. In TSN, hybrid communication is allocated to three distinct traffic classes: Time-Triggered (TT) traffic, Audio Video Bridging (AVB) traffic, and Best Effort (BE) traffic [16]. The scheduling of these traffic types is governed by the Gate Control List (GCL), a list containing entries that dictate whether each port is open or closed and which type of traffic can be sent out through open-status ports [17]. Both end-systems and bridges harnessing the capabilities of TSN allocate a series of queues to each egress port. The number of these queues varies, contingent on the specific features of each device.
To ensure all devices function harmoniously, they must be synchronized according to a global time standard, achieved using a reliable clock synchronization technique such as IEEE 802.1ASrev. Subsequent to this synchronization, gates are situated at the end of each queue, with their operation governed by the Gate Control List (GCL). The Time-Aware Shaper (TAS) utilizes the GCL to guarantee unobstructed access to the outgoing port for each TT stream. By adeptly integrating clock synchronization and TAS, traditional Ethernet networks are primed to support real-time applications requiring minimal latency and jitter [18]. Consequently, crafting an accurate Gate Control List (GCL) for each specific scenario is vital to fully tap into the potential of TSN within a network.
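To make the GCL mechanism concrete, the following minimal sketch (our own illustration, with hypothetical durations and bit masks that mirror the example in Section III, not values prescribed by the standard) models a GCL as a cyclic list of (duration, gate-state) entries and queries a queue's gate status at a given time:

```python
# Sketch of a Gate Control List (GCL): a cyclic sequence of
# (duration_us, gate_state) entries. Bit i of gate_state is 1 when
# the gate of queue i is open. All values are illustrative.
GCL = [
    (400, 0b00000011),  # queues 0-1 (TT) open for 400 us
    (300, 0b00000000),  # all gates closed (guard interval)
    (300, 0b11000000),  # queues 6-7 (BE) open for 300 us
]

def gate_is_open(gcl, queue, t_us):
    """True if `queue`'s gate is open at time t_us within the GCL cycle."""
    cycle = sum(duration for duration, _ in gcl)
    t = t_us % cycle
    for duration, state in gcl:
        if t < duration:
            return bool((state >> queue) & 1)
        t -= duration
    return False

assert gate_is_open(GCL, 0, 100)       # a TT queue is open early in the cycle
assert not gate_is_open(GCL, 6, 100)   # a BE queue is still closed
```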

B. RELATED WORKS
The Time-Sensitive Networking (TSN) standards, such as IEEE 802.1Qbv, define scheduling mechanisms. However, these standards do not specify any particular scheduling approaches. In this section, we explore research studies related to TSN scheduling, with a special focus on those that address joint routing. According to a survey by [3], two main categories of algorithms have been introduced for TSN scheduling. The first category comprises exact approaches, such as Integer Linear Programming (ILP), Satisfiability Modulo Theories (SMT), and Constraint Programming (CP). These methods can take considerable time but aim to find an optimal schedule with a specific objective. The second category, the heuristic approach, provides a reasonably good solution more quickly than the exact methods. Examples of these methods are the Greedy Randomized Adaptive Search Procedure (GRASP), Tabu Search, Simulated Annealing (SA), List Scheduling (LS), and Genetic Algorithms (GA).
Many researchers have investigated the joint routing and scheduling problem using various approaches. The ILP method was introduced in [19] to address this problem. In their evaluation, the researchers considered the effects of graph size, the number of streams, topology, and transmission frequency. Their optimization method's computation time was significantly influenced by the number of streams, whereas the network topology size had a lesser impact. Meanwhile, in [20], the authors reduced computational complexity and enhanced scalability in large-scale networks using an SMT-based approach.
Some researchers have employed heuristic algorithms in their joint routing and scheduling investigations.For instance, Wang et al. [21] utilized Ant-Colony Optimization (ACO), whereas another study [22] adopted Genetic Algorithms (GA).
The aspect of reliability was considered in [23], where the topology synthesis problem was formulated as an iterative path selection issue to minimize network costs. In another study [24], a CP-based approach was presented, introducing two Constraint Programming (CP) models: one featuring simple disjunctions, and the other incorporating optional interval variables that denote frame waiting times in queues. The methodology is as follows: first, a routing for the given streams is computed. A schedule is then determined based on this routing. If a schedule is not identified, constraints are added to the routing model to exclude the previous routing solution. This process is repeated until a schedule is finalized. However, it is possible that all potential routings for some streams might result in scheduling conflicts. Considering both reliability and security, a CP model, along with a combined SA and LS model, was introduced in [25]. This research accounted for task scheduling and multicast stream scheduling.
Other studies, like those by Schweissguth et al. [26], [27] and Yu et al. [28], have based their multicast stream scheduling on ILP. Similarly, Pahlevan et al. [18], [29] employed heuristic strategies in their scheduling, ensuring the multicast nature of the streams was addressed.
While most previous studies have concentrated on the scheduling and routing of TT traffic, some researchers have also considered other types of traffic, such as AVB and BE. Gavrilut et al. [30], [31], [32] have developed various heuristic algorithms to schedule TT streams while simultaneously accounting for AVB traffic. Moreover, a machine learning approach based on deep reinforcement learning was introduced by Yang et al. [33]. This approach takes into account TT traffic as well as both AVB and BE traffic. In our study, we explore a constraint programming (CP)-based approach to joint routing and scheduling that considers multicast capabilities and BE traffic.

III. SYSTEM MODEL
In this section, we will introduce our network, application, and bridge models. We will also explain the problem in detail. The notations used are provided in Table 1.

A. NETWORK MODEL
Our network model can be represented as a directed graph $G = \langle V, E \rangle$, where $V$ denotes nodes and $E$ signifies edges. The nodes $\upsilon_a \in V$ can be Bridges (BR), also known as network switches, or end-systems (ES). A connection between any two nodes, $(\upsilon_a, \upsilon_b)$, establishes a full-duplex and bi-directional edge or link. Within this framework, the sender end-system is termed the ''talker,'' and the receiver end-system is labeled the ''listener.'' It is assumed that our entire network, including every node, is synchronized. Every node is equipped with complete TSN capabilities and supports the 802.1Qbv standard, leveraging the Time-Aware Shaper (TAS) to manage traffic. Each end-system node, $\upsilon_a$, has a task scheduler governed by a periodic table.

B. APPLICATION MODEL
Our application model, $\delta_k \in \Delta$, is represented as a directed acyclic graph. It consists of a set of nodes that represent tasks $\tau_j$, while a set of edges, $E$, denotes the data communication between these tasks. Each application is characterized by its own period $\pi_{\delta_k}$. The Hyperperiod (HP) is derived by determining the least common multiple (lcm) of all these periods: $HP = \mathrm{lcm}(\{\pi_{\delta_k} \mid \delta_k \in \Delta\})$. An application task $\tau_j \in T$ can be defined by the tuple $(es_{\tau_j}, \omega_{\tau_j}, \pi_{\tau_j})$, where $es_{\tau_j}$ denotes the execution end-system, $\omega_{\tau_j}$ represents the worst-case execution time (WCET), and $\pi_{\tau_j}$ indicates its period. The set $T$ contains all such tasks. The initial end-system sending the stream is termed a 'talker', while the receiver end-system assumes the role of a 'listener'. Upon completing source task execution, the talker (or sender end-system) produces outgoing data streams, while the listener (or receiver end-system) must receive all incoming data streams before it can begin destination task execution. When tasks operate on the same end-system, their communication dependencies are typically managed through message queues or shared memory. However, for tasks on different end-systems, their communication requirements create dependencies represented by streams. In the TSN context, a stream denotes a communication need between a talker and one or more listeners, often referred to as unicast or multicast. We begin by examining multicast streams. A stream, denoted as $\sigma_i \in S$, originates from a source task labeled $\tau^{\sigma_i}_{talker}$ and is directed to multiple destination tasks, $T^{\sigma_i}_{listener}$. The set $S$ encompasses all such streams. Each stream necessitates a designated path (or route) to ensure it reaches its intended destination within its period $\pi_{\sigma_i}$. The Maximum Transmission Unit (MTU) represents the upper limit on the size of each stream, $L_{\sigma_i}$, and is predefined for the network.
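Since the Hyperperiod drives the whole schedule, the computation is worth showing explicitly; the sketch below uses Python's standard library and illustrative period values of our own:

```python
from math import lcm

# Sketch: the Hyperperiod is the least common multiple of all
# application periods (values in microseconds, illustrative).
application_periods = {"delta_1": 1000, "delta_2": 1000, "delta_3": 1000}
HP = lcm(*application_periods.values())
assert HP == 1000
```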

C. BRIDGE MODEL
In this section, we describe the architecture of the bridges used within our network. A consistent design is employed across all bridges. Each bridge, denoted as $br_n$ and part of the set $BR$ (which encompasses all bridges in the network), possesses eight distinct queues, represented as $q^{br_n}_{\phi}$ and belonging to the set $Q^{br_n}$. To streamline our representation, and given that all bridges share the same configuration, we will omit the $br_n$ superscript from the subsequent notation. Each queue is assigned a specific type, denoted by $\rho_{q_\phi}$, indicating the nature of traffic it manages. When the type of a stream, identified as $\rho_{\sigma_i}$, matches the type associated with one or more queues, the stream is allocated to those queues based on current queue storage conditions.
For example, in our configuration, we allocate queues $q_0$ and $q_1$ specifically to manage TT traffic. Suppose $q_0$ currently handles three TT streams while $q_1$ manages two. The next incoming TT stream will be directed to $q_1$. If, however, both queues are serving an equal number of streams, the subsequent TT stream will be assigned to the queue with the earlier order, in this case $q_0$. Each queue is paired with a specific gate, denoted as $g_q \in G_q$, responsible for regulating the outgoing data flow from the queue. These gates can assume one of two states: open or closed, determined by predefined Gate Control Lists (GCLs). Initially, all gates are set to the closed state. Every GCL has an associated gating time, represented as $t_\phi \in T^{br_n}$, at which the gate state of each queue, symbolized as $\psi_g$, alters according to the respective GCL entry. This GCL entry is represented by an eight-digit binary number: ones (1s) indicate an open gate, and zeros (0s) a closed one. The binary number is read from right to left: the far-right digit corresponds to the first queue, $q_0$, and it progresses sequentially to the eighth digit, which represents the last queue, $q_7$.
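The queue-selection rule above (least-loaded matching queue, ties broken by queue order) can be sketched as follows; the function and variable names are ours, purely illustrative:

```python
# Sketch of the queue-assignment rule: among queues whose type matches
# the stream's type, pick the least-loaded one, breaking ties by index.
def assign_queue(stream_type, queues, load):
    """queues: list of (index, type); load: dict index -> queued streams."""
    candidates = [i for i, qtype in queues if qtype == stream_type]
    return min(candidates, key=lambda i: (load[i], i))

queues = [(0, "TT"), (1, "TT"), (6, "BE"), (7, "BE")]
load = {0: 3, 1: 2, 6: 0, 7: 0}
assert assign_queue("TT", queues, load) == 1  # q1 holds fewer streams
load[1] = 3
assert assign_queue("TT", queues, load) == 0  # tie: earlier queue q0 wins
```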
Queues are categorized as either Time-Triggered (TT) or Best-Effort (BE). Each type is tailored to handle specific traffic streams: TT queues manage TT streams, while BE queues handle BE streams. In the current setup, $q_0$ and $q_1$ are allocated for TT traffic, while $q_6$ and $q_7$ cater to BE traffic. The intermediate four queues, $q_2$ to $q_5$, remain unutilized and are tagged as ''N/U'' (Not Used).
This queue configuration remains unchanged across different scenarios. Consequently, there will be four gate state transitions. Initially, the TT queues open at gating time one, $t_1$, with a GCL entry of ''00000011''. All gates then close at gating time two, $t_2$, evidenced by a GCL entry of ''00000000''. The BE traffic gates activate at gating time three, $t_3$, with a GCL entry of ''11000000''. All gates eventually close at gating time four, $t_4$, as indicated by a GCL entry of ''00000000''.
In some cases, the BE traffic gates may open concurrently with the closure of the TT traffic gates. In such a scenario, the first gating time activates the TT traffic gates, the subsequent one simultaneously deactivates the TT traffic gates and activates the BE traffic gates, and the final one deactivates all gates. The gating times, as defined by our GCLs, are influenced by the Hyperperiod of the network applications to enhance the likelihood of optimal network scheduling. Fig. 1 visually presents this bridge design. We model the Queue-Gating Time (QGT), represented as $\kappa^{q}_{\phi,\psi}$; its derivation stems from the gating times $t_\phi \in T^{br_n}$ associated with each bridge.
In this setup, the GCL entry ''00000011'' signifies that at $t_1 = 0$µs, the gates of $q_0$ and $q_1$ will be open (indicated by 1). In contrast, the gates from $q_2$ to $q_7$ will be closed (indicated by 0). As a specific example, when $t_1 = 0$µs, $q_0$ has an open status, expressed as $\psi_0 = 1$, denoting $\kappa^{0}_{1,1} = 0$µs. Similarly, $q_5$ remains closed, expressed as $\psi_5 = 0$ at $t_1 = 0$µs, resulting in $\kappa^{5}_{1,0} = 0$µs. The QGT values, represented as $\kappa^{q}_{\phi,\psi}$, are tabulated in Table 2. This table is structured as a matrix: rows illustrate the queues $Q$, and columns correspond to the gating times $T^{br_n}$. The matrix denotes that at a specific gating time $t_\phi$, the value $\kappa^{q}_{\phi,\psi}$ equals that gating time and relates to the queue $q_\phi$ with gate status $\psi_g$.

TABLE 2. A Queue-Gating Time (QGT) determines when each queue is open (indicated by a status of ''1'') or closed (indicated by a status of ''0'') at a particular gating time.
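The QGT matrix of Table 2 can be derived mechanically from the GCL; the sketch below (our illustration, using the gating times of the example in Section III-D) builds it as a nested mapping:

```python
# Sketch: deriving the Queue-Gating Time (QGT) matrix from GCL entries.
# qgt[q][t] gives the gate status psi of queue q at gating time t,
# mirroring the layout of Table 2. Values follow the running example.
gating_times = [0, 400, 700, 1000]        # t_1 .. t_4 (us)
gcl_entries = [0b00000011, 0b00000000,    # state taking effect at each t
               0b11000000, 0b00000000]

qgt = {q: {t: (state >> q) & 1
           for t, state in zip(gating_times, gcl_entries)}
       for q in range(8)}

assert qgt[0][0] == 1  # q0 open at t_1 = 0 us   (kappa^0_{1,1} = 0 us)
assert qgt[5][0] == 0  # q5 closed at t_1 = 0 us (kappa^5_{1,0} = 0 us)
```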

D. PROBLEM DEFINITION
In our study, we focus on the problem of scheduling and routing. This problem can be outlined as follows: we explore a system based on Time-Sensitive Networking (TSN), consisting of an array of bridges. These bridges, in turn, connect to multiple end-systems. Each end-system houses one or more tasks. Upon completion of these tasks, data streams are generated; these streams then emanate from the talkers and initiate tasks once they reach the listeners.
Within this intricate network, every task and its corresponding streams, collectively termed an 'application', function in harmony. Synchronizing these applications is crucial for achieving network performance hallmarked by minimal latency and reasonable response time. This desired state can be reached through precise task scheduling within the end-systems. Moreover, it requires choosing the best route and schedule for data streams traversing the bridges, all the while upholding the network's integrity to avert any possible data frame losses.
For every link $(\upsilon_a, \upsilon_b) \in E$, we assign a link speed of 10 Mbps and a propagation delay of zero. Each end-system engaged in the transmission or receipt of streams will execute a task or a series of tasks related to that particular stream. These tasks, together with their corresponding streams, should be integrated within a single application, either autonomously or in combination with other tasks and streams.
The set of applications is denoted by $\Delta = \{\delta_1, \delta_2, \delta_3\}$. Each application comprises one stream: $\delta_1$ and $\delta_3$ both contain two tasks, while $\delta_2$ encompasses three tasks. Tasks from the same application reside at both the talker and listener end-systems, connected by a single stream.
Each stream in $S = \{\sigma_1, \sigma_2, \sigma_3\}$ has a specific size. These sizes, represented by $L_{\sigma_1}$, $L_{\sigma_2}$, and $L_{\sigma_3}$, are 70 Bytes, 80 Bytes, and 35 Bytes, respectively. The worst-case execution time (WCET) $\omega_{\tau_j}$ for tasks in $\delta_1$ and $\delta_2$ is 30µs. In contrast, tasks within $\delta_3$ have an $\omega_{\tau_j}$ of 20µs. There are two distinct traffic types with varied priorities. Streams from $\delta_1$ and $\delta_2$ rely on a Time-Triggered (TT) protocol, while the stream from $\delta_3$ operates on a Best-Effort (BE) basis, as delineated in Table 3. The periods assigned to tasks and streams align with the periods of their corresponding applications. For simplicity in this example, we set a consistent period and deadline of 1000µs across all applications. As such, the Hyperperiod (HP) for our applications is derived as $HP = \mathrm{lcm}(\pi_{\delta_1}, \pi_{\delta_2}, \pi_{\delta_3}) = \mathrm{lcm}(1000, 1000, 1000) = 1000$µs.

TABLE 3.
The applications, streams, and traffic types of the exemplary problem instance.
Each of the four bridges is equipped with eight queues, each with a corresponding gate. The first two queues, $q_0$ and $q_1$, are designated for TT traffic. In contrast, the last two queues, $q_6$ and $q_7$, cater to BE traffic. The middle queues, $q_2$ through $q_5$, remain unused.
We have defined a series of gating times $t_\phi$, culminating in a final gating time of 1000µs. Specifically, the first is at 0µs, the second at 400µs, the third at 700µs, and the last precisely at 1000µs. This timing aligns with the period and deadline set for our applications. The gates $g_0$ and $g_1$, associated with the first two queues, enter the ''open status'' phase ($\psi_0 = \psi_1 = 1$) at 0µs and switch to the ''closed status'' phase ($\psi_0 = \psi_1 = 0$) at 400µs. This interval is specifically for processing the TT traffic in these queues.
As detailed in Table 3, the gates $g_6$ and $g_7$ begin in the ''closed status'' ($\psi_6 = \psi_7 = 0$) and transition to the ''open status'' ($\psi_6 = \psi_7 = 1$) at 700µs, returning to the closed status at 1000µs. This ensures uninterrupted TT traffic transmission across the bridges without interference from BE traffic, while the remainder of the 1000µs period is reserved for BE traffic.
Based on the GCL entry presented in Table 4, the gates $g_6$ and $g_7$ maintain a ''closed status'' ($\psi_6 = \psi_7 = 0$) for the initial 700µs. This ensures the seamless transmission of TT traffic across the bridges, free from interference from BE traffic. At 700µs, these gates transition to an ''open status'' ($\psi_6 = \psi_7 = 1$), lasting until they return to a ''closed status'' at 1000µs. This period is reserved for BE traffic processing. Fig. 3 showcases the configuration of gates and queues, detailing time allocations for each traffic type. As specified in the GCL entry in Table 4, and combined with the QGT $\kappa^{q}_{\phi,\psi}$ calculations detailed in Section III-C, the $\kappa^{q}_{\phi,\psi}$ values for our example are elaborated in Table 5. The QGT indicates when each queue is either open (marked by a status of ''1'') or closed (marked by a status of ''0'') relative to a specific gating time. In the subsequent example and throughout this text, we use the microsecond (µs) and millisecond (ms) as our primary time units. Fig. 4 presents the schedule of our example via a Gantt chart. This visualization displays tasks and streams concurrently, with varied colors distinguishing the specific applications to which they correspond. Shaded blocks highlight the periods when distinct traffic types have transmission permission across each bridge, influenced by the gate statuses. In this context, the Hyperperiod (HP) is 1000µs. The bottom four rows showcase end-systems where tasks commence and their corresponding streams arise. Tasks interlinked within the same application bear the same color. The remaining ten rows detail the links between device pairs, marking frame occurrences.
For instance, the stream labeled as $\sigma_1$ and colored light blue first appears on the link between $es_1$ and $br_2$. It initiates transmission at 30µs, aligning with the end time of task $\tau_1$, and culminates at 86µs. To determine the stream's duration, we divide its size in bits (70 Bytes = 560 bits) by the link speed (10 Mbps), resulting in 56µs for $\sigma_1$. It is pivotal to acknowledge that we determine the durations of other streams similarly, factoring the frame overhead into the stream size.
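This per-link duration calculation generalizes to any frame; a one-line helper (a sketch, with a function name of our own) makes the unit conversion explicit:

```python
# Sketch: transmission duration of a frame on a link.
# Bytes are converted to bits; bits / (Mbit/s) yields microseconds.
def duration_us(size_bytes: int, link_speed_mbps: float) -> float:
    return size_bytes * 8 / link_speed_mbps

assert duration_us(70, 10) == 56.0  # sigma_1: 70 Bytes at 10 Mbps
```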
Rectangles shaded in halftone blue denote gate opening times for the TT queues, while those in halftone yellow symbolize gate opening times for BE queues. On these rectangles, the x-axis depicts gate operation timings, while the y-axis signifies the bridges serving as conduits for the traversing streams.

IV. OPTIMIZATION FRAMEWORK
Optimization problems similar to the one highlighted in this paper are classified as NP-hard, since the Bin-Packing problem can be reduced to them, as referenced in [19] and [25]. When dealing with large input data sizes, these problems can become intractable. To find solutions, we employ Constraint Programming (CP) in our optimization framework. We have implemented an approach called ''Optimized Hybrid Deterministic Scheduling and Routing'' (OHDSR). The term ''hybrid'' refers to the combination of two types of traffic: TT and BE. This optimization framework efficiently finds robust solutions within a reasonable time frame.

A. TIME-SENSITIVE CONSTRAINTS
The aim of any optimization solver is to identify the most suitable solution from a range of possible solutions to a problem. This selection is determined by specific criteria defined by an objective function. The optimization process involves adhering to several constraints, which serve as boundaries or conditions describing the acceptable solutions within the optimization problem. In this section, we discuss the constraints we implemented and categorize them into four sub-sections: routing constraints, priority scheduling constraints, stream scheduling constraints, and task scheduling constraints.

1) ROUTING CONSTRAINTS
To transmit a stream, denoted as $\sigma_i$, from a source node (referred to as the talker) across a network, passing through various intermediate nodes (known as bridges), and eventually reaching the destination nodes (termed listeners), certain routing constraints must be established. The constraints detailed in this section are influenced by the work presented in [25] and [34].
To find the route of each stream, the successor nodes need to be identified. The successor node of node $\upsilon_a$ along the path of stream $\sigma_i$ is denoted as $Z^{\sigma_i}_{\upsilon_a}$. It can be established through a reverse route, beginning the procedure at the receiver nodes and progressively constructing it as a tree structure. For every stream in the network, $Z^{\sigma_i}_{\upsilon_a}$ will be calculated for each node within the network, taking into account all bridges and end-systems. If $\upsilon_a$ is the talker, then $Z^{\sigma_i}_{\upsilon_a} = \upsilon_a$. If $\upsilon_a$ lies along the stream's path, then $Z^{\sigma_i}_{\upsilon_a} = \upsilon_b$, the next node toward the talker. Conversely, if $\upsilon_a$ is not on the path of the stream, $Z^{\sigma_i}_{\upsilon_a} = nil$. Referring to the exemplary instance in Section III-D, and considering stream $\sigma_1$, its path is described as $[es_1 \rightarrow br_2 \rightarrow br_3 \rightarrow es_4]$. Consequently, $Z^{\sigma_1}_{\upsilon_a}$ will be $es_1$ if $\upsilon_a = es_1$ or $\upsilon_a = br_2$, while $Z^{\sigma_1}_{\upsilon_a}$ will equal $br_2$ if $\upsilon_a = br_3$, and $br_3$ if $\upsilon_a = es_4$. In the case where $\upsilon_a$ is any node not on the path of stream $\sigma_1$ (like $es_2$, $es_3$, $br_1$, and $br_4$), $Z^{\sigma_1}_{\upsilon_a} = nil$; Fig. 5 clarifies the concept behind this approach. Additionally, the length of the path from $\upsilon_a$ to the talker node of stream $\sigma_i$ is denoted as $\Upsilon^{\sigma_i}_{\upsilon_a}$, with the allowable range of values defined as $0 \le \Upsilon^{\sigma_i}_{\upsilon_a} \le |BR| + 1$. The routing constraints are outlined in equations (1) through (5), as follows. Equation (1) demonstrates that, for each stream, if the successor of node $\upsilon_a$ is not equivalent to $nil$, indicating the presence of a successor for node $\upsilon_a$ along the path of stream $\sigma_i$, then the length of the path at node $\upsilon_a$ should equal the length of the path at its successor node $Z^{\sigma_i}_{\upsilon_a}$, increased by one. This constraint helps in avoiding cycles in the route of the stream.

Equation (2) illustrates that each node with a successor along the path of stream $\sigma_i$ must also have a predecessor, and vice versa, considering the reverse route used. This helps in preventing any loose ends.

Equation (3) states that every listener end-system of stream $\sigma_i$ must have a successor: $\forall \sigma_i \in S$, $\forall \upsilon_a \in es_{\tau^{\sigma_i}_{listener}}$, where $\tau^{\sigma_i}_{listener} \in T^{\sigma_i}_{listener}$.

Equation (4) imposes that for every stream $\sigma_i$, the talker node $es_{\tau^{\sigma_i}_{talker}}$ identifies itself as the successor, with the respective path length being zero at the talker's position.

Equation (5) indicates that the successor node will be $nil$ for any end-systems in the network that possess neither a listener task $\tau^{\sigma_i}_{listener}$ nor a talker task $\tau^{\sigma_i}_{talker}$. It is important to note that this applies only to end-system nodes, excluding the bridge nodes.
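For concreteness, constraints (1) through (5) can be written as follows; this is our reconstruction from the descriptions above, with notation as defined in Section III and all constraints quantified over $\sigma_i \in S$:

\begin{align}
& Z^{\sigma_i}_{\upsilon_a} \neq nil \;\Rightarrow\; \Upsilon^{\sigma_i}_{\upsilon_a} = \Upsilon^{\sigma_i}_{Z^{\sigma_i}_{\upsilon_a}} + 1
  && \forall \upsilon_a \neq es_{\tau^{\sigma_i}_{talker}} \tag{1}\\
& Z^{\sigma_i}_{\upsilon_a} \neq nil \;\Leftrightarrow\; \exists\, \upsilon_b : Z^{\sigma_i}_{\upsilon_b} = \upsilon_a
  && \text{for nodes other than talkers and listeners} \tag{2}\\
& Z^{\sigma_i}_{\upsilon_a} \neq nil
  && \forall \upsilon_a \in es_{T^{\sigma_i}_{listener}} \tag{3}\\
& Z^{\sigma_i}_{es_{\tau^{\sigma_i}_{talker}}} = es_{\tau^{\sigma_i}_{talker}}, \qquad
  \Upsilon^{\sigma_i}_{es_{\tau^{\sigma_i}_{talker}}} = 0 \tag{4}\\
& Z^{\sigma_i}_{\upsilon_a} = nil
  && \forall \upsilon_a \in ES \text{ hosting no task of } \sigma_i \tag{5}
\end{align}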

2) PRIORITY SCHEDULING CONSTRAINTS
The priority scheduling constraint is applied to each bridge $br_n \in BR$, which accommodates streams $\sigma_i \in S$ of various types $\rho_{\sigma_i}$ passing through it. As discussed in Section III-C, each bridge is equipped with a set of eight queues $q^{br_n}_{\phi} \in Q^{br_n}$, each characterized by a different type $\rho_{q_\phi}$. Streams sharing the type of a specific queue will be allocated to that queue.
Every queue operates with two QGTs $\kappa^{q}_{\phi,\psi}$, governing both the open (1) and closed (0) statuses. These times are contingent upon the gating times $t^{br_n}_{\phi} \in T^{br_n}$ associated with each bridge, from which the respective open and closed times are calculated.
Equation (6) assigns the time domain for both the offset and the end-time of each stream at each bridge. The offset of stream $\sigma_i$ should be greater than or equal to the QGT when the status is 1, indicating the ''open'' status. Conversely, the end-time of that stream must be less than or equal to the QGT when the status is 0, denoting the ''closed'' status. This constraint does not apply to the offsets and end-times of streams at end-systems $es_m \in ES$ and at links $(\upsilon_a, \upsilon_b) \in E$, as queues and gates are exclusively incorporated in the design of the bridges.
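Writing $o^{br_n}_{\sigma_i}$ and $e^{br_n}_{\sigma_i}$ for the offset and end-time of stream $\sigma_i$ at bridge $br_n$ (our shorthand), constraint (6) can be reconstructed as

\[
\kappa^{q}_{\phi,1} \;\le\; o^{br_n}_{\sigma_i}
\qquad \text{and} \qquad
e^{br_n}_{\sigma_i} \;\le\; \kappa^{q}_{\phi',0}, \tag{6}
\]

where $q$ is the queue assigned to $\sigma_i$ at $br_n$, $t_\phi$ is the gating time that opens it, and $t_{\phi'}$ is the subsequent gating time that closes it.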

3) STREAM SCHEDULING CONSTRAINTS
In this section, we introduce the stream scheduling constraints. Adhering to these constraints facilitates the search for the optimal solution within our optimization framework by finding the best achievable schedule. The constraints are as follows. The first constraint in the process of scheduling streams is determining the duration of each stream $\sigma_i$ at each link $(\upsilon_a, \upsilon_b)$. Equation (7) depicts this by calculating the duration as the stream size (converted from Bytes to bits) divided by the link speed (in Mbps), thereby yielding a duration measured in microseconds (µs).
After assigning the duration to each stream, we require in (8) that the end time of each stream equals the sum of that stream's offset and its duration. This constraint is applied to all streams $\sigma_i \in S$ across all links on their respective paths. Equation (9) enforces the precise sequencing of any two links that are located along the route path $R_{\sigma_i}$ of any stream $\sigma_i$.
This involves incorporating the links $(\upsilon_a, \upsilon_b)$ and $(\upsilon_b, \upsilon_c)$ as parts of this route. Given that this stream progresses through the link $(\upsilon_a, \upsilon_b)$ before advancing to $(\upsilon_b, \upsilon_c)$, the offset of that stream at the link $(\upsilon_b, \upsilon_c)$ must be greater than or equal to the end time at the preceding link $(\upsilon_a, \upsilon_b)$.
Equation (10) prevents any overlap between two different streams $\sigma_1$ and $\sigma_2$ that share a link $(\upsilon_a, \upsilon_b)$ within their respective routes $R_{\sigma_1}$ and $R_{\sigma_2}$. This entails that the timeframe during which stream $\sigma_1$ utilizes the link must not coincide with that of stream $\sigma_2$. Implementing this constraint ensures equitable resource allocation, thereby enhancing both the efficiency and predictability of the scheduling process.
To enhance the resilience of frame transmissions and prevent any delays or loss of frames, Equation (11) is introduced. At the egress port of each node $\upsilon_b$ (which, in this case, functions as a bridge $br_n$), this bridge node is part of the link $(\upsilon_b, \upsilon_c)$. The link $(\upsilon_b, \upsilon_c)$ is located on the routes $R_{\sigma_1}$ and $R_{\sigma_2}$ common to both streams $\sigma_1$ and $\sigma_2$. Meanwhile, the link $(\upsilon_{a1}, \upsilon_b)$ is part of the route $R_{\sigma_1}$, and the link $(\upsilon_{a2}, \upsilon_b)$ is found on the route $R_{\sigma_2}$. The two frames arriving at the bridge node $\upsilon_b$ come from different sources: the first arrives from node $\upsilon_{a1}$ in stream $\sigma_1$ via the link $(\upsilon_{a1}, \upsilon_b)$, while the second arrives from node $\upsilon_{a2}$ in stream $\sigma_2$ via the link $(\upsilon_{a2}, \upsilon_b)$. The frames arriving at the ingress port of node $\upsilon_b$ must not overlap within the time domain. This principle of frame isolation, originally suggested by Craciunas et al. [35], has become a widely accepted practice in configuring TSN networks.
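For reference, constraints (7) through (11) can be written as follows; this is our reconstruction from the descriptions above, with $o$, $e$, and $d$ denoting per-link offsets, end-times, and durations, and $s_{(\upsilon_a,\upsilon_b)}$ the link speed:

\begin{align}
& d^{(\upsilon_a,\upsilon_b)}_{\sigma_i} = \frac{8 \cdot L_{\sigma_i}}{s_{(\upsilon_a,\upsilon_b)}} \tag{7}\\
& e^{(\upsilon_a,\upsilon_b)}_{\sigma_i} = o^{(\upsilon_a,\upsilon_b)}_{\sigma_i} + d^{(\upsilon_a,\upsilon_b)}_{\sigma_i} \tag{8}\\
& o^{(\upsilon_b,\upsilon_c)}_{\sigma_i} \ge e^{(\upsilon_a,\upsilon_b)}_{\sigma_i}
  && (\upsilon_a,\upsilon_b), (\upsilon_b,\upsilon_c) \in R_{\sigma_i} \tag{9}\\
& e^{(\upsilon_a,\upsilon_b)}_{\sigma_1} \le o^{(\upsilon_a,\upsilon_b)}_{\sigma_2}
  \;\vee\; e^{(\upsilon_a,\upsilon_b)}_{\sigma_2} \le o^{(\upsilon_a,\upsilon_b)}_{\sigma_1}
  && (\upsilon_a,\upsilon_b) \in R_{\sigma_1} \cap R_{\sigma_2} \tag{10}\\
& e^{(\upsilon_{a1},\upsilon_b)}_{\sigma_1} \le o^{(\upsilon_{a2},\upsilon_b)}_{\sigma_2}
  \;\vee\; e^{(\upsilon_{a2},\upsilon_b)}_{\sigma_2} \le o^{(\upsilon_{a1},\upsilon_b)}_{\sigma_1} \tag{11}
\end{align}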

4) TASK SCHEDULING CONSTRAINTS
Tasks are vital components of applications and are linked to various streams within those applications. Establishing an optimal schedule for these tasks is important. To achieve this, the following constraints should be fulfilled. Equation (12) determines the end-time of each task $\tau_j$ at the node $\upsilon_a$ where the task is executed. Each task possesses a worst-case execution time $\omega_{\tau_j}$, representing the duration required for the task to complete. The end-time of a task is calculated by adding its execution time to its offset time.
As we highlighted before, certain tasks and streams within the same application are interconnected. The coordination between these interconnected tasks and streams needs to be carefully planned with regard to their timing sequences.
As depicted in Equation (13), the tasks taking place at the talker nodes need to be initiated and completed before the beginning of the corresponding outgoing stream. This implies that the offset of stream $\sigma_i$, originating from node $\upsilon_a$ and passing through the link $(\upsilon_a, \upsilon_b)$, should be greater than or equal to the completion time of task $\tau_j$, which is executed at node $\upsilon_a$.
Building upon the principles presented in (13), Equation (14) addresses the incoming streams at the listener nodes and the corresponding tasks executed at these nodes. Tasks at any listener node must not be initiated prior to the receipt of the corresponding incoming stream. This means that the offset of task $\tau_j$ located at node $\upsilon_b$ must be greater than or equal to the end-time of stream $\sigma_i$ reaching node $\upsilon_b$ through the link $(\upsilon_a, \upsilon_b)$.
Equation (15) guarantees that no overlap occurs between any two different tasks $\tau_1$ and $\tau_2$ executed at the same end-system $\upsilon_a$.
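Constraints (12) through (15) can likewise be reconstructed from the descriptions above as

\begin{align}
& e_{\tau_j} = o_{\tau_j} + \omega_{\tau_j} \tag{12}\\
& o^{(\upsilon_a,\upsilon_b)}_{\sigma_i} \ge e_{\tau_j}
  && \tau_j \text{ the talker task at } \upsilon_a \tag{13}\\
& o_{\tau_j} \ge e^{(\upsilon_a,\upsilon_b)}_{\sigma_i}
  && \tau_j \text{ a listener task at } \upsilon_b \tag{14}\\
& e_{\tau_1} \le o_{\tau_2} \;\vee\; e_{\tau_2} \le o_{\tau_1}
  && \tau_1, \tau_2 \text{ on the same end-system} \tag{15}
\end{align}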

B. OBJECTIVE FUNCTIONS
Within the structure of our optimization framework, we focus on stream routing and the scheduling of both streams and tasks, while taking into account the priorities associated with the different types of these streams. To facilitate this, we introduce two objective functions in this section: one related to routing and the other concerning scheduling.

1) ROUTING OBJECTIVE FUNCTION
The objective of the routing optimization explored in this paper is to minimize the total sum of the lengths of the routes of all streams, as depicted in (16). For each node, if the successor node is not $nil$, it is included in the node count for stream $\sigma_i$, excluding the originating talker node. The values obtained are then summed over all streams and subsequently minimized.
2) SCHEDULING OBJECTIVE FUNCTION
In our study, the objective of scheduling optimization is to minimize the sum of the latencies of all transmissions, as delineated in (17). As illustrated in Fig. 6, latency refers to the duration of a transmission, beginning at the initiation point (offset) of the task $\tau_{talker}$ at talker node $es_{\tau_{talker}}$ and, accounting for the time required for stream transmission, ending at the completion (end-time) of the task $\tau_{listener}$ at listener node $es_{\tau_{listener}}$. Each application comprises multiple tasks that are executed at the end-systems, with streams being transferred from one end-system to another via bridges. Every task and stream has a start-time, end-time, and duration, which represent the interval between the initiation and ending points. Through this objective, we aim to significantly reduce the total latency, thereby fulfilling one of the primary targets in time-sensitive networks.
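Under the same reconstructed notation, the two objectives read

\begin{align}
& \min \sum_{\sigma_i \in S} \Big|\big\{ \upsilon_a \in V : Z^{\sigma_i}_{\upsilon_a} \neq nil,\; \upsilon_a \neq es_{\tau^{\sigma_i}_{talker}} \big\}\Big| \tag{16}\\
& \min \sum_{\delta_k \in \Delta} \sum_{\tau_{listener},\, \tau_{talker} \in T_k}
  \big( e_{\tau_{listener}} - o_{\tau_{talker}} \big) \tag{17}
\end{align}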

V. EXPERIMENTAL EVALUATION
In this section, we assess the performance of our optimization framework on various synthetic problem instances, using an experimental setup compatible with the framework.

A. PROBLEM INSTANCES AND EXPERIMENTAL SETUP
For a comprehensive evaluation of the proposed framework, we chose a variety of problem instances representative of characteristics commonly found in realistic automation and autonomous vehicle networks. We categorized these instances into four groups based on the number of bridges used: 6, 12, 18, and 24. Each group comprises six problem instances. These are further divided into two subgroups, each with distinct numbers of end-systems. The first subgroup has a 1.5:1 ratio of end-systems to bridges, while the second has a 2:1 ratio, as detailed in Table 6.
Within this setup, connectivity is established through bi-directional, full-duplex links. These links connect both end-systems to bridges and bridges amongst themselves, operating at a transmission speed of 1 Gbps. In our research, we devised an algorithm to construct network topologies for the specified problem instances. To implement this algorithm, we utilized the NetworkX Python library [36], which aids in establishing graphs that depict the network topologies. Bridge edges connecting to other bridges are chosen at random. We tweaked the algorithm to ensure that end-systems do not establish direct links with one another. Furthermore, every end-system connects to a minimum of two and a maximum of four bridges, selected at random.
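The following sketch conveys the idea of the generation algorithm (our own simplified version, not the authors' exact code): bridges are interconnected at random on top of a connecting chain, end-systems attach to two to four bridges, and no end-system links directly to another:

```python
import random
import networkx as nx

def build_topology(num_bridges, num_es, seed=0):
    """Sketch of a random TSN topology generator (illustrative only)."""
    rng = random.Random(seed)
    g = nx.Graph()
    bridges = [f"br{n}" for n in range(num_bridges)]
    # a chain keeps the bridge mesh connected; extra edges are random
    for a, b in zip(bridges, bridges[1:]):
        g.add_edge(a, b)
    for _ in range(num_bridges):
        g.add_edge(*rng.sample(bridges, 2))
    # each end-system attaches to 2-4 randomly chosen bridges
    for m in range(num_es):
        for br in rng.sample(bridges, rng.randint(2, 4)):
            g.add_edge(f"es{m}", br)
    return g

g = build_topology(6, 9)
assert not any(u.startswith("es") and v.startswith("es") for u, v in g.edges)
```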
To further diversify our testing scenarios, we vary the data stream configurations within each subgroup. The first variant contains more BE streams than TT streams, the second has an equal number of both, while the third primarily employs TT streams; we refer to these as the BE-dominant, balanced TT-BE, and TT-dominant scenarios, respectively. This methodology allows for a comprehensive evaluation of our framework across diverse scenarios within a consistent network topology. Notably, around one in four of the streams within the same application is multicast, irrespective of priority.
Additionally, the number of tasks ranges from 43 to 224, influenced by the number of streams in each application. These tasks are assigned to the end-systems and have random worst-case execution times of up to three percent of their designated periods. Finally, we cap stream sizes below the maximum transmission unit of 1500 Bytes, selecting sizes at random with the frame overhead taken into account.
Each application has a period set at 1000µs, ensuring a consistent period across all its streams and tasks. Every bridge comes with eight queues. Notably, the first two queues are designated for TT streams, while the last two are reserved for BE streams. The middle four queues remain unused.
Each queue features a gate that functions based on a predetermined gating time specified in the GCL. This gating time is set by the Hyperperiod of the network applications under evaluation. The gates for the TT queues open right at the start of the Hyperperiod, always at 0µs. They close after 50%-60% of the Hyperperiod has elapsed. Following this, all gates stay closed for a duration ranging from 0% to 5% of the Hyperperiod. The remaining time within the Hyperperiod is reserved for opening the BE queue gates.
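Our reading of this timing rule can be sketched as follows (the percentages come from the text above; the function itself is our own illustration):

```python
import random

def gating_times(hp_us, seed=0):
    """Sketch: derive the four gating times t_1..t_4 from the Hyperperiod.
    TT gates open at 0 and close after 50-60% of HP; an all-closed guard
    of 0-5% of HP follows; the remainder is the BE window."""
    rng = random.Random(seed)
    tt_close = round(hp_us * rng.uniform(0.50, 0.60))
    be_open = tt_close + round(hp_us * rng.uniform(0.00, 0.05))
    return [0, tt_close, be_open, hp_us]

print(gating_times(1000))  # four gating times within a 1000 us hyperperiod
```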
Constraint programming (CP) is widely used, with many tools available across different programming languages to tackle CP optimization challenges. In our study, we employed the CP-SAT solver [37], a component of the OR-Tools suite, an open-source collection developed by Google. We tested all problem instances on a system powered by an Intel Core i7-11370H CPU, running at 3.30 GHz with 8 CPUs and 8 GB of memory.
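To indicate how such constraints map onto the CP-SAT solver, the sketch below (a deliberately tiny illustration of ours, far simpler than the full OHDSR model) schedules two streams on one shared link using interval variables, a no-overlap constraint in the spirit of (10), and a minimized sum of end times:

```python
from ortools.sat.python import cp_model

model = cp_model.CpModel()
horizon = 1000                         # hyperperiod in us
durations = {"sigma_1": 56, "sigma_2": 64}

ends, intervals = [], []
for name, d in durations.items():
    start = model.NewIntVar(0, horizon, f"start_{name}")
    end = model.NewIntVar(0, horizon, f"end_{name}")
    # the interval variable ties start + d == end together
    intervals.append(model.NewIntervalVar(start, d, end, f"iv_{name}"))
    ends.append(end)

model.AddNoOverlap(intervals)          # streams may not overlap on the link
model.Minimize(sum(ends))

solver = cp_model.CpSolver()
assert solver.Solve(model) == cp_model.OPTIMAL
```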

B. NUMERICAL RESULTS AND DISCUSSION
In this subsection, we provide a detailed review of our results for each problem instance. We begin by evaluating the latency, followed by an analysis of the response times. Finally, we compare our response time results with findings from previous related studies.

1) LATENCY EVALUATION
First, we aim to evaluate the overall latency of the transmission. Latency is defined as the time taken from the start of the sender task to the end-time of the last receiver task. Additionally, we evaluate the worst-case latency for both TT and BE streams to guarantee that the latency does not impact their reception times.
In Table 7, instances are highlighted where the number of BE streams surpasses that of TT streams. We refer to instances of this scenario as BE-dominant. Despite this, TT traffic retains its top priority. The total latency increases rapidly for larger scale instances having a higher count of streams. Due to the larger number of BE streams, the worst-case latency for BE streams is typically higher than that for TT streams in most scenarios. Among all instances, the 12-18-A instance exhibits the greatest difference, with a maximum of 14.29%, reflecting the advantage held by TT streams. Table 8 addresses instances where the numbers of TT and BE streams are equal. For this balanced TT-BE scenario, the worst-case latency values are closely matched, indicating an even distribution of the load between the two stream types. The total latency rises with the scale of the instances. This latency pattern is similar to what we saw with the BE-dominant scenarios mentioned earlier, and it resembles the TT-dominant scenarios in Table 9.

TABLE 8. The total latency, the worst-case latency for Time-Triggered (TT) streams, and the worst-case latency for Best Effort (BE) streams, for the balanced TT-BE problem instances.
An observation from Table 9 is that even when TT streams are dominant in the final set of instances, the variance in worst-case latency is small. This suggests that our model consistently delivers stable performance for BE streams, even when paired with the higher-priority TT streams. Such observations provide deeper insight into the system's adeptness at managing varying stream priorities.

TABLE 9. The total latency, the worst-case latency for Time-Triggered (TT) streams, and the worst-case latency for Best Effort (BE) streams, for the TT-dominant problem instances.
In our work, the performance of TT streams is crucial. However, we have not overlooked the BE streams: we strive to ensure their transmission with minimum latency and zero loss. We ensure that the dominance of one stream type does not impact the performance of the other. For example, in scenarios where TT streams are dominant and guaranteed minimal worst-case latency, the BE streams also exhibit acceptable worst-case latency.
To evaluate latencies across varied dominance scenarios and network scales, we illustrate the distribution of total latencies in Fig. 7. As previously highlighted, larger networks with more streams tend to exhibit greater total latency. However, it is observed that the prevalence of one stream type over another has minimal impact on the total latency. This suggests that our model operates effectively across different scenarios. The most marked difference in total latency for networks of similar scale, but varying stream dominance, is seen in the 12-24 scale scenarios. Here, there is a 100µs discrepancy between the balanced TT-BE and TT-dominant scenarios, representing a 3.94% increase. Notably, this marginal increase does not compromise the network's performance.
FIGURE 7. Total latency comparison across three scenarios: BE-dominant, balanced TT-BE, and TT-dominant, for various scale instances.

2) RESPONSE TIME EVALUATION
Another key metric we evaluate is the response time, which denotes the duration the model solver requires to identify a feasible solution. The total response time encompasses the complete duration needed for the whole procedure, whereas the scheduling and routing response times refer to the durations the scheduling and routing models respectively need for processing. The total response time, scheduling response time, and routing response time for BE-dominant instances are showcased in Table 10; for balanced TT-BE instances in Table 11; and for TT-dominant instances in Table 12. Much like latency, the response time is decisively influenced by the size of the network. The total response time exhibits minor variations between different scenarios of the same scale. For instance, within the 18-36 scale instances, the maximum difference in response time is 3670ms, observed between the TT-dominant and BE-dominant scenarios. This represents an 8.75% increase for the BE-dominant scenario. Across all instances and scenarios, the scheduling response time is significantly lower than the routing response time. The differences in scheduling response times among different scenarios are minimal, though they tend to be higher for the TT-dominant scenarios in larger-scale instances.
The response time exhibited a steady increase up to the 24-48 scale scenarios. For instances within this scale, the BE-dominant scenarios displayed a consistent increase, with the total response time rising by 57.5% compared to the prior 24-36 scale. However, when the count of TT streams surged in the balanced TT-BE and TT-dominant scenarios, there was a pronounced escalation in the total response time: almost 7-fold for the balanced TT-BE scenarios and nearly 8-fold for the TT-dominant scenarios. Notably, the scheduling model maintained a reasonable response time even at this scale, with the substantial increase primarily attributed to the routing model. This emphasizes that an increased count of TT streams in larger-scale networks can induce a steeper rise in the response time of the routing model, compared to smaller-scale networks, which in turn influences the total response time. Nevertheless, the model remains efficient, yielding optimal solutions for both the routing and scheduling models.
To thoroughly understand the findings of this study, it is crucial to compare them with previous related research. We compare our response time results with the previous related work presented in [25] (REU-CP). In that research, the authors tackled the joint scheduling and routing problem using a CP model. They provide open access to their model, enabling us to benchmark our response time findings against theirs. It is worth noting that they use the term ''optimization time'' instead of ''response time'' in their model. We evaluated all the problem instances from Table 6 using the REU-CP model. The comparisons span various scenarios. For a clearer depiction, we benchmarked the results across our three distinct scenarios.
In Fig. 8, we compare our total response time results for the first group of scenarios, specifically those where BE streams outnumber TT streams. The results indicate that OHDSR outperforms in terms of total response time regardless of the network instance scale. With a reduction of up to 37.21% for the smallest scale instance, 6-9-A, our model demonstrates a notable enhancement in total response time. As we scale to larger instances, our model consistently maintains strong performance, exhibiting up to a 6.63% improvement for the 24-48-A instance.
Similar to the first scenario, when the numbers of both stream types are equal, our model still outperforms in terms of total response time. As depicted in Fig. 9, there is a reduction of up to 12.35%, translating to 9674ms, for the instance 24-48-B, a large-scale network with 24 bridges and 48 end-systems. This reduction is more pronounced for the smaller scale instances, reaching up to 50.48% for the instance 6-9-B. This trend of enhanced total response time is also observed in instances dominated by TT streams, as illustrated in Fig. 10. For these scenarios, the OHDSR model registers a decrease of up to 26.36% when compared to the REU-CP model. For the 24-48-B and 24-48-C instances, we previously observed a significant uptick in total response time compared to the smaller scale instances. Yet, the OHDSR model managed to fully process and provide an optimal solution, maintaining a steady response time for the scheduling model. When running these particular instances on the REU-CP model, the cp-sat solver returned an ''UNKNOWN'' value, indicating no solution was identified. Such consistent results for instances executed on the same system, as shown in Table 13, indicate that the scalability of OHDSR exceeds that of REU-CP.
Scheduling is a crucial element in achieving the objectives of Time-Sensitive Networking. Consequently, we aim to compare the performance of our scheduling model with that of REU-CP in terms of response time. This comparison will focus on the same groups of scenarios we previously used when evaluating total response time.
In contrast to the total response time, the scheduling response time of the OHDSR model exhibits more significant improvement. As illustrated in Fig. 11, our model registers a decrease of 272.38ms for the largest scale instance, representing a 32.62% reduction. For the smallest scale instance, the improvement climbs to 71.26%. This markedly superior performance is also observed in the second group of instances shown in Fig. 12, which feature a balanced number of both TT and BE streams. Excluding the 24-48-B instance, which lacks a solution in the REU-CP model, our model demonstrates improvements ranging from 28.21% to 65.13% for this set of instances. Lastly, we examine the scheduling response time for the third group of instances depicted in Fig. 13, which predominantly feature TT streams over BE streams. The OHDSR model maintains its commendable performance when compared with REU-CP, especially for larger scale instances relative to other groups. The 24-48-C instance cannot be compared as it lacks a solution in the REU-CP model. The most expansive instance for which we can make a direct scheduling response time comparison between both models is the 24-36-C instance. Here, our model showcases a 46.86% reduction in time, which stands out as a significant decrease for an instance of its size, especially when benchmarked against the 32.62% and 35.44% achieved by the other two instance groups. All these findings highlight the enhanced performance of the OHDSR model in both total response time and scheduling response time compared to the REU-CP model.

VI. CONCLUSION
The evolution of communication networks has heightened the need for real-time communication in numerous sectors, including industrial automation and autonomous vehicle networks. Time-Sensitive Networking (TSN) emerged as a novel technology, encompassing a set of standards that facilitate deterministic communication over Ethernet networks. This paper addresses the issue of joint scheduling and routing in TSN through the Optimized Hybrid Deterministic Scheduling and Routing (OHDSR) approach. The approach presented is based on constraint programming, which is employed to optimize routing and scheduling by finding the best solution that satisfies the imposed constraints. Along with the scheduling and routing of Time-Triggered (TT) traffic, Best-Effort (BE) traffic is also considered. The evaluation reveals impressive results concerning the total latency of the traffic and ensures an acceptable worst-case latency for both types of traffic. The experimental evaluation spans various scales of networks with different numbers of bridges and end-systems, and the results highlight the high scalability of our approach. Notably, our model exhibits a markedly improved response time in comparison to similar approaches, with outstanding performance for the scheduling model in terms of response time.

FIGURE 1 .
FIGURE 1. A simplified TSN bridge featuring queues and gates. Time-Triggered (TT) traffic is allocated to Queues 0 and 1, while Best Effort (BE) traffic is in Queues 6 and 7. Queues 2-5 are unused (N/U).

FIGURE 2 .
FIGURE 2. The network topology of the exemplary problem instance illustrates the route path taken by each stream.

FIGURE 3. TABLE 4.
FIGURE 3. The opening and closing times of the gates for each queue, along with the type of traffic permitted for transmission during those open intervals.

FIGURE 4 .
FIGURE 4. The network schedule of the exemplary problem instance for the tasks and streams.

FIGURE 5 .
FIGURE 5. The value of $Z^{\sigma_i}_{\upsilon_a}$ for stream $\sigma_1$ from the exemplary problem instance.

FIGURE 6 .
FIGURE 6. The latency of a transmission, measured from the offset of the talker task to the end-time of the listener task.

FIGURE 8 .
FIGURE 8. The total response time comparison between our proposed OHDSR model and the related work REU-CP [25] model for the BE-dominant problem instances.

FIGURE 9 .
FIGURE 9. The total response time comparison between our proposed OHDSR model and the related work REU-CP [25] model for the balanced TT-BE problem instances.

FIGURE 10 .
FIGURE 10. The total response time comparison between our proposed OHDSR model and the related work REU-CP [25] model for the TT-dominant problem instances.

TABLE 13 .
Comparison between our proposed OHDSR model and the related work REU-CP model for the total response time and scheduling response time for the 24-48 scale problem instances.

FIGURE 11 .
FIGURE 11. The scheduling response time comparison between our proposed OHDSR model and the related work REU-CP [25] model for the BE-dominant problem instances.

FIGURE 12 .
FIGURE 12. The scheduling response time comparison between our proposed OHDSR model and the related work REU-CP [25] model for the balanced TT-BE problem instances.

FIGURE 13 .
FIGURE 13. The scheduling response time comparison between our proposed OHDSR model and the related work REU-CP [25] model for the TT-dominant problem instances.

TABLE 5 .
The values of the Queue-Gating Time (QGT) for each queue at a particular gating time.

TABLE 7 .
The total latency, the worst-case latency for Time-Triggered (TT) streams, and the worst-case latency for Best Effort (BE) streams, for the BE-dominant problem instances.

TABLE 10 .
The total response time, the scheduling response time, and the routing response time, for the BE-dominant problem instances.

TABLE 11 .
The total response time, the scheduling response time, and the routing response time, for the balanced TT-BE problem instances.

TABLE 12 .
The total response time, the scheduling response time, and the routing response time, for the TT-dominant problem instances.