The Earliest Smooth Release Time for a New Task Based on EDF Algorithm

Although the EDF (Earliest Deadline First) algorithm has been studied extensively for more than 40 years, only a few researchers have published work on bandwidth transfer between tasks. If currently running tasks are compressed to free part of their occupied bandwidth to accommodate new requirements, such as the insertion of a new task, then a basic requirement of this operation is smoothness, that is, no deadline may be missed. Suppose the current tasks are compressed immediately at the request time of the new task; to guarantee smoothness, the new task may have to be released later than the request time. An interesting and challenging problem is to find the earliest smooth release time. In this paper, an algorithm that evaluates the earliest release time for the insertion of a single task is presented and formally proved. To complete the algorithm, only the deadlines during the transition need to be checked, and each of them needs to be checked at most once. A novel experimental approach is adopted, and more than 4,549,320 different tests are run to verify the theorems in simulation.


I. INTRODUCTION
Bandwidth transfer and reallocation are often unavoidable in bandwidth limited applications. Consider an Internet network channel. If the bandwidth of the channel is shared by a few users, the Internet may seem very fast. When new consumers request access to the network through the same channel, one or more current users have to free part of their bandwidth.
In embedded devices with limited energy, a low operating frequency is usually selected under a light load condition. The frequency may be raised when the load becomes heavier to meet the time constraints of the system tasks. If a new and urgent task requests to come into a system that is already 100% loaded at the highest frequency, then transferring a certain percentage of the bandwidth from less important current (or old) tasks to the new one is a reasonable decision.
In fact, bandwidth transfer is worth considering even if the operating frequency is not the highest and the load is below 100%. Suppose the system runs at a frequency f_1 and a deadline miss is certain to occur due to the insertion of a new task. One possible choice to avoid the miss is to increase the frequency above f_1. Another option is to reduce the bandwidth occupied by the old tasks while the operating frequency remains unchanged.
In this paper, bandwidth transfer based on the EDF (Earliest Deadline First) algorithm in real-time applications is discussed [1]. One or more running tasks are compressed to meet new bandwidth requirements that come from the acceleration of other current tasks and/or the insertion of new tasks [2]. Compression means that a task's period is prolonged while its computation time remains unchanged, so its occupied bandwidth (or utilization) decreases. Conversely, to accelerate a task is to shorten its period. It has been proved that the acceleration of a current task can be treated as the insertion of an equivalent new task [3]. Therefore, as far as new requirements are concerned, only the insertion needs to be discussed.
New task insertion can be categorized as a mode-change problem [4]-[7]. It has three stages, as shown in FIGURE 1: the old mode starting from t_oldb, the transition process from t_r, and the new mode from t_newb. The request for the insertion occurs at t_r, and certain current tasks start to be compressed.
If new tasks are inserted at t_r immediately, it is known that a deadline miss may occur even though the sum of the utilizations of all the tasks, called the total utilization or total bandwidth, does not exceed one [2], [3]. In [8] and [9], it is proved that deadline missing is only possible during the time interval [d_min, d_max) that is part of the transition, where d_min and d_max represent the earliest and the latest deadline of the current instances of all the old tasks, respectively.
∀t ≥ t_r, if new tasks released at t cause no deadline miss afterwards, then t is called a smooth insertion time (or smooth release time), denoted by δ. Obviously, finding the earliest smooth insertion time, denoted by δ_earliest, is both significant and challenging.
A concise formula for calculating a smooth release time δ is given in [3], but the result is not guaranteed to be δ_earliest. One obvious way to get δ_earliest is to perform multiple rounds of deadline checks. First we assume δ_earliest = t_r and check every deadline from d_min to d_max. If no deadline can be missed, then we conclude that δ_earliest is indeed equal to t_r and no more checks are required; otherwise, the next round of deadline checking is performed with a delayed assumed release time. The timestep is the increment of the release time of the new task from the current round to the next. The Smart way presented in [8] shows that the timestep can be greater than one time unit in some cases, so that the real δ_earliest can be reached quickly. However, each deadline point in [d_min, d_max) may need to be checked multiple times, even if there is only one new task.
Paper Contributions: (i) To get the real δ_earliest for the insertion of a new task, an advanced algorithm, denoted ESITforSNT (Earliest Smooth Insertion Time for Single New Task), is presented and proved. With ESITforSNT, each deadline in the region [d_min, d_max) needs to be checked at most once, which is far fewer checks than the Smart way requires in many situations. (ii) To verify the correctness of the new algorithm, a novel approach is demonstrated in simulation. Firstly, every experimental task set, with its total bandwidth exactly equal to 100%, is carefully chosen so that every logical branch of ESITforSNT can be tested; these task sets may serve as a reference for other researchers in the future. Secondly, an offline iteration algorithm that is obviously correct, though time-consuming, is used for comparison. The offline algorithm and ESITforSNT produce the same δ_earliest value in every test.
Paper Structure: Section II reviews task compression; a delaying rule is introduced and an example is provided. Section III presents Theorems 2 and 3, based on which ESITforSNT is designed to calculate δ_earliest. Section IV shows the novel experimental approach and the simulation results. Section V discusses related work, and Section VI concludes the paper.

II. SYSTEM MODEL AND AN EXAMPLE
Multiple-task compression and the calculation of the processor demands of the system tasks after compression are recalled in this section [3], [8], [9]. A delaying rule is proposed as the basic means of approaching δ_earliest. An example is provided to help in understanding the relevant theory. The main symbols used in this paper are summarized in Appendix A.

A. MULTIPLE TASKS' COMPRESSION
As shown in FIGURE 2, the system has m current tasks that form a task set M. ∀τ_i(C_i, T_i) ∈ M, i ∈ {0, 1, ..., m−1}, τ_i has computation time C_i, period T_i, and utilization U_i = C_i/T_i. The starting point of its current period is t_i. At t_r, new tasks request to be inserted and thus τ_i is compressed: its period increases from T_i to T_i' and its utilization decreases to U_i' = C_i/T_i'. The remaining computation of its current instance is c_i(t_r). The total bandwidth freed by compressing the tasks in M equals Σ_{i=0}^{m−1} (U_i − U_i'). Suppose the new tasks constitute a subset J and are inserted into the system at the same time. The sum of the utilizations of the tasks in J is denoted U_J, and r_J represents the release time of their first instances, r_J ≥ t_r. To keep the system schedulable in the new mode, it is assumed that U_J does not exceed the freed bandwidth. For the convenience of the associated descriptions, d_min and d_max denote the earliest and the latest deadline of the current instances of the tasks in M after compression, respectively, that is, d_min = min_i(t_i + T_i') and d_max = max_i(t_i + T_i'). As described above, deadline missing is only possible during [d_min, d_max), which is proved in [8] and [9] ([8] is in English while [9] is in Chinese).
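For concreteness, the freed bandwidth and the bounds d_min and d_max can be computed directly from the task parameters. The following Python sketch uses the two-task example from Section II-D; the helper names are illustrative, not from the paper:

```python
def freed_bandwidth(tasks):
    """Total bandwidth freed by compression: sum of (C/T - C/T') over the
    M tasks, where T' is the prolonged period. Each task is (C, T, T', t_i)."""
    return sum(C / T - C / Tp for (C, T, Tp, t) in tasks)

def deadline_bounds(tasks):
    """d_min and d_max: earliest and latest deadline of the current
    (compressed) instances, i.e. min and max of t_i + T_i'."""
    deadlines = [t + Tp for (C, T, Tp, t) in tasks]
    return min(deadlines), max(deadlines)

# Section II-D example: tau_0(8,16) compressed to period 32,
# tau_1(8,16) unchanged, both released at t = 0.
M = [(8, 16, 32, 0), (8, 16, 16, 0)]
print(freed_bandwidth(M))    # -> 0.25, the bandwidth handed to tau_j(1,4)
print(deadline_bounds(M))    # -> (16, 32)
```

The printed values match the example: a quarter of the bandwidth is transferred, d_min = 16 and d_max = 32.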
For ease of reference, the important assumptions of the system model studied in this paper are listed below: • Every task is periodic and scheduled on one processor.
• Every instance has an implicit deadline.
• The computation time of every instance of a task remains unchanged.
• Every old task is released at t_oldb and runs until d_max (or beyond) without pausing.
• The total bandwidth (or total utilization) of the tasks in the system is exactly 100%, both before and after compression.
• The bandwidth transferred from the compression equals that required by new tasks.

B. PROCESSOR DEMANDS AFTER COMPRESSION
The proofs of the subsequent theorems rely on the processor demand criterion [10], [11]. For the compression problem, we can evaluate the processor demand of each task from t_r on.
∀t ≥ t_r, the processor demand of a task in (t_r, t] is the amount of computation that must be completed by t to meet all of its deadlines within the interval. The sums of the processor demands of all the tasks in J and in M are denoted D_J(t_r, t) and D_M(t_r, t), respectively. The sum of D_J(t_r, t) and D_M(t_r, t) is called the total processor demand, denoted D_total(t_r, t).
Then, Δ(t_r, t) = D_total(t_r, t) − (t − t_r) is introduced. According to the processor demand criterion, the deadlines at t are met if and only if Δ(t_r, t) is less than or equal to zero. Verifying whether Δ(t_r, t) is greater than zero is called a check, and Δ(t_r, t) is the value of the check.
The processor demand of the new tasks can be calculated with
D_J(t_r, t) = Σ_{τ_j ∈ J} max(0, ⌊(t − r_J)/T_j⌋) · C_j. (3)
The processor demand of a compressed task τ_i should be computed with
D_i(t_r, t) = c_i(t_r) + max(0, ⌊(t − t_i − T_i')/T_i'⌋) · C_i. (4)
From t_r to t_i + T_i', the deadline point t_i + T_i' of τ_i is met only if τ_i is assigned processor time equal to c_i(t_r); thus its processor demand equals c_i(t_r) in this interval.
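The demand bookkeeping of (3) and (4) can be sketched in code. The Python helpers below are illustrative (the names are not from the paper); they reproduce the first check value of 2 reported for the FIGURE 3 example at t = 16 with r_j = t_r = 8:

```python
def demand_new(t_r, t, r_j, C_j, T_j):
    """(3): demand of a new task in (t_r, t] -- C_j for every deadline
    r_j + k*T_j (implicit deadlines) that falls at or before t."""
    return max(0, (t - r_j) // T_j) * C_j

def demand_compressed(t_r, t, c_rem, C, T_prime, t_start):
    """(4): demand of a compressed M task in (t_r, t] -- its remaining
    computation c_i(t_r), plus C_i per deadline after t_start + T_prime."""
    return c_rem + max(0, (t - t_start - T_prime) // T_prime) * C

def check_value(t_r, t, D_total):
    """Delta(t_r, t): the deadlines at t are met iff this value is <= 0."""
    return D_total - (t - t_r)

# FIGURE 3 at t = 16 with r_j = t_r = 8 (tau_1 holds the 8 remaining
# units, tau_0 none): the first check value is 2, matching the text.
D = (demand_compressed(8, 16, 8, 8, 16, 0)    # tau_1
     + demand_compressed(8, 16, 0, 8, 32, 0)  # tau_0, period prolonged to 32
     + demand_new(8, 16, 8, 1, 4))            # tau_j(1,4)
print(check_value(8, 16, D))   # -> 2
```

The remaining computations c_i(t_r) used here are assumptions consistent with the check values quoted in Section II-D.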

C. A DELAYING RULE
With (3) and (4), the important Theorem 1, presented and proved in [8], can be recalled.
Theorem 1: While compressing the task set M, suppose that new tasks are released from r_J and a deadline miss occurs at a time t_x in [d_min, d_max); then the insertion should be delayed, and we must have δ_earliest ≥ r_J + Δ(t_r, t_x). That is, the delayed insertion can be smooth only when the release time is delayed by no less than the value of the check at t_x.
In the following, a task in the task set M shown in FIGURE 2 is called an M task. We now define deadline points and an important delaying rule based on Theorem 1.

Definition 1 (Deadline Points): A time point at which at least one deadline of a task instance occurs is called a deadline point. An M task deadline point refers to a time at which there is at least one deadline of an M task instance. Comparatively, if a deadline of an instance of any new task occurs, then we have a new task deadline point. If deadlines of different tasks take an identical time value, then this time point is called an overlapped deadline point.
Definition 2 (Delaying Rule): According to Theorem 1, if the check value Δ(t_r, t_x) is greater than zero at a deadline point t_x, then δ_earliest will not be earlier than r_J + Δ(t_r, t_x). Therefore, r_J should be delayed to r_J + Δ(t_r, t_x). If t_x is an M task deadline point, then the check at t_x must be done again with the delayed r_J. If t_x is a new task deadline point, which moves with the delaying of r_J, then t_x' = t_x + Δ(t_r, t_x) should be checked instead. This type of check may have to be repeated many times before the check value becomes nonpositive. This approach of check, delay, and recheck until Δ ≤ 0 is defined as the delaying rule. A point t_x is said to pass its check as soon as Δ(t_r, t_x) ≤ 0 holds.
To get δ_earliest using the delaying rule, we start the check with r_J = t_r and t_x = d_min. Once every deadline point in the region [d_min, d_max) passes its check, the newest r_J is the real δ_earliest.
If Δ(t_r, t_x) > 0, the delaying rule shows that the greater the value of Δ(t_r, t_x), the longer r_J is delayed, and hence the fewer checks are required for δ_earliest. This is the major contribution of Theorem 1.

D. AN EXAMPLE
The example in FIGURE 3 demonstrates the use of the delaying rule to evaluate δ_earliest. When there is only one new task τ_j(C_j, T_j), r_j is used to denote its release time. In this figure, before t_r = 8, the system has two tasks: τ_0(8,16) and τ_1(8,16). The total utilization equals one. After t_r, T_1 remains unchanged, but the period of τ_0 is prolonged from 16 to 32; thus a bandwidth of 1/4 is transferred to the new task τ_j(1,4). The total utilization remains 100%. It is easy to see that d_min = T_1 = 16 and d_max = T_0' = 32.
If the new task is released at r_j = t_r = 8, as shown in FIGURE 3(a), the deadline point d_min = 16 cannot pass its first check because the value of this check is Δ(t_r, 16) = 2. According to the delaying rule, r_j = r_j + Δ(t_r, 16) = 10 is applied, as shown in FIGURE 3(b), and the check is done again. Unfortunately, d_min does not pass its second check either, because Δ(t_r, 16) = 1, so further delay is needed. The consecutive checks are as follows: • The third check: r_j = 11, Δ(t_r, 16) = 1. • The fourth check: r_j = 12, Δ(t_r, 16) = 1. • The fifth check: r_j = 13, Δ(t_r, 16) = 0. That is to say, five checks in total have to be done at d_min. Then four deadlines of the new task have to be checked to get the real δ_earliest by the Smart way: t = 17, t = 21, t = 25, and t = 29. Each of these four deadlines passes its check the first time, so r_j is not delayed further. Therefore, nine checks in total are needed to get δ_earliest = 13. Using the new algorithm ESITforSNT presented in the next section, however, only one check is enough to reach δ_earliest = 13.
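The delaying-rule trace above can be replayed in code. The sketch below hard-codes the FIGURE 3 instance (assuming τ_0 has finished its current computation by t_r = 8, so τ_1 holds the 8 remaining units, which is consistent with the first check value of 2); the function names are illustrative:

```python
# Replay of the FIGURE 3 delaying-rule (Smart way) trace.
T_R, C_J, T_J, D_MAX = 8, 1, 4, 32

def delta(t, r_j):
    """Check value Delta(t_r, t) for this instance, t < d_max."""
    demand = 8 if t >= 16 else 0                  # tau_1's remaining work, due at 16
    demand += max(0, (t - r_j) // T_J) * C_J      # tau_j deadlines at or before t
    return demand - (t - T_R)

def smart_way():
    r_j, checks = T_R, 0
    while True:                                   # recheck d_min = 16 until it passes
        checks += 1
        v = delta(16, r_j)
        if v <= 0:
            break
        r_j += v
    t = r_j + ((16 - r_j) // T_J + 1) * T_J       # first tau_j deadline after 16
    while t < D_MAX:                              # then the moving tau_j deadlines
        checks += 1
        v = delta(t, r_j)
        if v > 0:
            r_j += v
            t += v
        else:
            t += T_J
    return r_j, checks

print(smart_way())   # -> (13, 9): delta_earliest = 13 after nine checks
```

Running it reproduces the count in the text: five checks at d_min, then one passing check at each of t = 17, 21, 25, 29.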

III. NEW THEOREMS AND ESITforSNT ALGORITHM
Although the number of required checks may be reduced by Theorem 1, a deadline point in [d_min, d_max) may need to be checked multiple times even if there is only one new task. In this section, new theorems are presented and proved for a single new task's insertion, with which every M task deadline point needs to be checked only once to get δ_earliest. Based on these theorems, an advanced algorithm, called ESITforSNT, is provided.

A. NEW THEOREMS
In FIGURE 3(a), d_min = 16, d_max = 32, and there is only one M task deadline point in [d_min, d_max). But the new task has four deadline points there: t = 16, t = 20, t = 24, and t = 28. Note that these points of the new task also shift with the delaying of r_j.
Based on Theorem 1, we should check not only the M task deadline points in [d_min, d_max) but also the new task deadline points. Fortunately, this can be simplified with Theorems 2 and 3.
Theorem 2 (Criterion 1): While compressing the task set M for the insertion of a single new task τ_j(C_j, T_j) under EDF, let r_j denote the release time of τ_j. In [d_min, d_max), suppose that the set M has n deadline points d_M(0), d_M(1), ..., d_M(n−1), arranged in ascending order, and let d_j(k)(0), d_j(k)(1), ... denote the deadline points of τ_j in (d_M(k), d_M(k+1)). If d_M(k) and d_j(k)(0) pass their checks, then d_j(k)(1), d_j(k)(2), ... (if any) will also pass their checks.
Proof: See Appendix B.
Multiple new task deadline points may exist in the region (d_M(k), d_M(k+1)). Theorem 2 declares that if the first one passes its check, then the rest will too. As a result, every deadline point of the M tasks and some deadline points of the new task in [d_min, d_max) should be checked to obtain δ_earliest. Theorem 3 will indicate that these points need to be checked at most once. Take FIGURE 3(a) as an example, where n = 1. We need to check the only deadline point of the M tasks at t = 16 and the first deadline point of τ_j at t = 20 in the region [16, 32). No other deadline point needs to be checked. Note that the point at t = 16 is an overlapped one, since both an M task and the new task have a deadline there.

Theorem 3 (Criterion 2):
If there is only one new task τ_j, then the deadline points d_M(k) and d_j(k)(0) pass their checks as soon as the current r_j is delayed by L_(k) (in time units), calculated according to Case 1, Equation (6), or Case 2, Equation (7).
Proof: See Appendix C.
Case 1 of Theorem 3 means that if d_M(k) does not pass its check, then the release time r_j of the new task should be delayed by the amount L_(k) given by (6), and there is no need to check d_j(k)(0) any more.
If d_M(k) passes its check, then we come to Case 2 of Theorem 3: the check of d_j(k)(0) must be done, and r_j is delayed by (7) according to the check value. Now we use Theorem 3 on the example of FIGURE 3. First let r_j = t_r = 8. Because the check value Δ(t_r, d_M(0)) = Δ(8, 16) = 2 > 0, using (6) we obtain L_(0) = 5, and thus δ_earliest = r_j + L_(0) = 13. This shows that only one check is needed to get δ_earliest. Recall that nine checks are required with the Smart way, as described before.
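Since the closed forms (6) and (7) are given as equations in the paper, the sketch below stands in for them with a direct search for the smallest delay that lets both d_M(0) and the first new-task deadline after it pass; on the FIGURE 3 instance (same assumed remaining computations as before) it reproduces δ_earliest = 13:

```python
T_R, C_J, T_J, D_MAX = 8, 1, 4, 32

def delta(t, r_j):
    """Check value Delta(t_r, t) for the FIGURE 3 instance, t < d_max."""
    demand = 8 if t >= 16 else 0                  # tau_1's remaining computation
    demand += max(0, (t - r_j) // T_J) * C_J
    return demand - (t - T_R)

def delay_for(d_M, r_j):
    """Smallest L such that both d_M and the first tau_j deadline after d_M
    pass their checks once the release is moved to r_j + L (a brute-force
    stand-in for the closed forms (6)/(7))."""
    L = 0
    while True:
        r = r_j + L
        # first deadline of tau_j strictly after d_M
        d_j0 = r + ((d_M - r) // T_J + 1) * T_J if r <= d_M else r + T_J
        if delta(d_M, r) <= 0 and (d_j0 >= D_MAX or delta(d_j0, r) <= 0):
            return L
        L += 1

print(T_R + delay_for(16, T_R))   # -> 13
```

The search returns L_(0) = 5, matching the single-step result quoted in the text.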
In Theorem 2, when multiple M task deadline points exist in the region [d_min, d_max), it is assumed that they are sorted from d_M(0) to d_M(n−1) according to the times at which they appear. Although this assumption helps in describing the theorem, sorting incurs some overhead. Fortunately, Lemma 1 states that δ_earliest can be calculated without sorting.
Lemma 1: For all the M task deadline points in [d_min, d_max), i.e., d_M(0), d_M(1), ..., d_M(n−1), δ_earliest is not influenced by the order in which they are checked.
Proof: See Appendix D.

B. ESITforSNT ALGORITHM
We now design a new algorithm, named ESITforSNT, to calculate δ_earliest based on Theorems 2 and 3. In [d_min, d_max), the checking process can be implemented task by task, since queuing the deadline points is unnecessary by Lemma 1. First the deadline points of task τ_0 are checked: t_0 + T_0', t_0 + 2T_0', ..., t_0 + pT_0', where p is a positive integer and t_0 + pT_0' < d_max. Subsequently, τ_1, τ_2, ..., and τ_{m−1} are checked. If the value of any check is greater than zero, then r_j is updated according to Theorem 3. δ_earliest takes the value of r_j after all the deadline points have been checked.
Overlapped deadline points should be checked only once. Therefore, an array flag[ ] is introduced to mark whether a deadline point is an overlapped one. A problem is that if the region [d_min, d_max) is long, then the required capacity of flag[ ] becomes too large. To solve this, the region [d_min, d_max) is divided into segments in the algorithm. Checking is done segment by segment, so flag[ ] only needs to cover one segment. The starting point, ending point, and length of a segment are denoted t_segstart, t_segend, and L_seg, respectively. At the beginning, an initial constant is assigned to L_seg. Note that the length of the last segment, which ends at d_max, may be less than this constant.

Algorithm 1 ESITforSNT
The main function:
1: calculate d_min and d_max
2: initialize: r_j = t_r, L_seg = L_SEG, clear mark and flag[ ], t_segstart = t_segend = d_min
3: while (t_segend < d_max) do
4:   create a segment (L_seg, t_segend, t_segstart)
5:   mark++
6:   for (i = 0; i < m; i++) do
7:     while (r_j < d_M[i] < t_segend) do
8:       if (flag[d_M[i] − t_segstart] != mark) then
9:         flag[d_M[i] − t_segstart] = mark
10:        r_j = deadlinecheck(d_M[i], r_j)
11:       end if
12:       d_M[i] = d_M[i] + T_i'
13:     end while
14:   end for
15: end while
16: δ_earliest = r_j

The sub function deadlinecheck(d_M(k), r_j):
1: evaluate the processor demands with (3) and (4)
2: do the check at d_M(k)
3: if (Δ(t_r, d_M(k)) > 0) then
4:   calculate L_(k) with (6)
5: else
6:   calculate L_(k) with (7)
7: end if
8: r_j = r_j + L_(k)
9: return r_j
In the main function of this algorithm, d_min and d_max are calculated first. Then the initialization is done (line 2): r_j is initialized to t_r, and an array d_M[ ] is introduced to hold the M task deadlines.
In the initialization, an initial constant L_SEG is assigned to L_seg, mark and flag[ ] are cleared, and t_segstart = d_min. Here, mark is used to set the values of the elements of flag[ ].
A segment is built in line 4. If t_segend + L_SEG − d_max > 0, then L_seg = d_max − t_segend; otherwise L_seg = L_SEG. Next, t_segend is increased by L_seg and t_segstart = t_segend − L_seg. In this way, the first segment starts from d_min and the last one ends at d_max.
The Temporal Complexity: In this algorithm, the most time-consuming operation is the execution of the sub function deadlinecheck(); the time required by the algorithm mainly depends on this sub function. When it is called, (3) and (4) are required to evaluate the processor demands of the tasks, and one or two checks are needed: first d_M[k] is checked, and then possibly d_j(k)(0). Therefore, it is reasonable to measure the temporal complexity of the algorithm by the number of checks. How many checks are required to get δ_earliest? In Theorem 2, it is assumed that the M task set has n deadline points in [d_min, d_max). Therefore, the worst-case complexity of this algorithm equals 2n checks.

IV. EXPERIMENTS
The purpose of the following experiments is to verify Theorems 2 and 3 and the ESITforSNT algorithm. In these experiments, the release time of all the M tasks is assumed to be zero, that is, t_oldb = 0. The question is how to implement the verification. A very simple offline standard algorithm, obviously correct though time-consuming, is used. In every experiment, the δ_earliest values from ESITforSNT and from the offline algorithm are compared. The pseudocode of the standard algorithm is provided in Algorithm 2.
To start Algorithm 2, r_j = t_r is assumed first. All the tasks are scheduled and the system simply runs from t_r to d_max. If a deadline miss occurs after t_r, then r_j = r_j + 1 is applied and the system runs from t_r to d_max again. If no deadline is missed by d_max, then the algorithm ends with δ_earliest = r_j. The correctness of this process is obvious, so it can serve as a standard algorithm. Let T_LCM(0∼m−1) be the least common multiple of the periods of the M tasks before compression. If t_r = 0 or t_r = T_LCM(0∼m−1), the new task can obviously be inserted smoothly and immediately at t_r. Therefore, for a set of M tasks, we only run the experiments for the cases from t_r = 1 to t_r = T_LCM(0∼m−1) − 1. For each case, we compare the δ_earliest value obtained from ESITforSNT with that from the standard algorithm.
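Algorithm 2 advances the schedule step by step; an equivalent brute-force sketch over candidate r_j values (using the processor demand criterion instead of an explicit EDF simulator, with illustrative names) looks like this, shown on the FIGURE 3 instance:

```python
def earliest_smooth_release(t_r, d_max, m_deadlines, C_j, T_j):
    """Brute force: try r_j = t_r, t_r+1, ... and accept the first value for
    which every deadline point t in (t_r, d_max) satisfies
    Delta(t_r, t) <= 0. m_deadlines maps each M-task deadline point to the
    computation due by it."""
    r_j = t_r
    while True:
        points = sorted(set(m_deadlines) |
                        {r_j + k * T_j
                         for k in range(1, (d_max - r_j) // T_j + 1)})

        def demand(t):
            d = sum(c for (dl, c) in m_deadlines.items() if dl <= t)
            return d + max(0, (t - r_j) // T_j) * C_j

        if all(demand(t) - (t - t_r) <= 0
               for t in points if t_r < t < d_max):
            return r_j
        r_j += 1

# FIGURE 3 instance: 8 units due at t = 16 (tau_1), 8 more at t = 32.
print(earliest_smooth_release(8, 32, {16: 8, 32: 8}, 1, 4))   # -> 13
```

On this instance the brute force agrees with both the Smart way and ESITforSNT, returning δ_earliest = 13.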
For the convenience of describing a bandwidth transfer process, Definition 3 is provided first.

Algorithm 2 Offline or Standard Algorithm
The main function:
1: find out d_max
2: schedule M tasks from t = 0 to t_r
3: compress M tasks and set r_j = t_r
4: loop:
5: schedule M tasks from t = t_r to r_j
6: schedule M tasks and τ_j from t = r_j to d_max
7: if (deadline missing occurs) then
8:   r_j++
9:   go to loop
10: end if
11: δ_earliest = r_j

Definition 3 (Freed Bandwidth Ratio): The freed bandwidth ratio of an M task τ_i is p_freed(i) = (U_i − U_i')/U_i, the fraction of its original bandwidth released by compression.
In the experiments, tasks are configured mainly according to p_freed(i). TABLE 1 gives an initial configuration of the task parameters. The new task is τ_j(3, 5) with bandwidth U_j = 0.6. There are four tasks in the M task set: τ_0, τ_1, τ_2, and τ_3. Before compression, U_1 = 0.4 and U_2 = 0.4. After compression, each of these two tasks frees its bandwidth at a ratio of 0.75, that is, p_freed(1) = p_freed(2) = 0.75.
Then, the bandwidth U_j of the new task is given by an expression, called the bandwidth allocation expression: U_j = p_freed(1)·U_1 + p_freed(2)·U_2 = 0.75 × 0.4 + 0.75 × 0.4 = 0.6. The period T_0 of τ_0 takes 201 integer values from 50 to 250; accordingly, C_0 takes 201 integer values.
For τ_3, its period T_3 takes the value of the least common multiple of T_0, T_1 and T_2, denoted T_LCM(0∼2). The values of C_3 and C_0 are calculated so that U_0 + U_3 = 0.2 is guaranteed. The total number of M task sets generated from TABLE 1 is 201; these task sets are listed in TABLE 2. The reasons for this configuration are as follows: • The bandwidth of τ_3 is very small (about 0.005). It is used together with τ_0 to produce U_0 + U_3 = 0.2, so that the total bandwidths both before and after compression are exactly equal to 1. Thus ESITforSNT can be verified under exactly 100% load. In addition, we set T_3 = T_LCM(0∼2) to make T_LCM(0∼3) = T_3, that is, the least common multiple of the periods of all the M tasks is T_3. Thus the number of possible t_r values (from 1 to T_LCM(0∼3) − 1) is controllable and not too large.
• τ_1 and τ_2 are configured with relatively large freed bandwidth ratios: both p_freed(1) and p_freed(2) are 0.75. The reason is that a transfer of large bandwidth easily causes δ_earliest > t_r, which is good for the verification. With an unsuitable configuration, on the contrary, there may be few or even no cases with δ_earliest > t_r.
• A relatively short period, T_j = 5, is assigned to the new task τ_j. For a constant U_j, the shorter T_j is, the greater the processor demand of τ_j in a given time region, and the more likely a deadline is to be missed. Therefore, a new task with a short period is used in the simulation.
• τ_0 does not transfer any bandwidth, and its period is changeable. Compare T_0 with T_1 and T_2: when T_0 takes a value in [50, 120) (region 1), it is the shortest, T_0 < T_1 < T_2; when T_0 takes a value in [120, 180) (region 2), it is in the middle, T_1 < T_0 < T_2; when T_0 takes a value in [180, 250] (region 3), it is the longest, T_1 < T_2 < T_0.
The Experimental Results: With the 201 different sets of M tasks, there are 4,549,320 tests producing 4,549,320 δ_earliest values. For a specific M task set, every test is done with a different t_r value. In each test, an identical δ_earliest value is obtained from ESITforSNT and from the standard algorithm.
Different task configurations from TABLE 1 are also used to test ESITforSNT. All the experimental results show that the above theorems are correct.

V. RELATED WORK
Many papers are related to mode-change problems [4]-[6]. A mode change is initiated whenever a significant change in the internal state or an event from the environment is detected. The reasons for changing operational modes are well listed in [12]. There are four basic requirements: schedulability, promptness, periodicity, and consistency [13]. To meet them, researchers have emphasized two points: the protocol and the offset.
The two major types of protocols are synchronous and asynchronous [12], [14], [15]. With synchronous protocols, new-mode tasks (allowed to be released after t_r) cannot be released until all the old-mode tasks (released only in the old mode) have completed their last activations, while with asynchronous protocols, new-mode and old-mode tasks can execute at the same time during the transition process. A comparison between the two is made in [13]: synchronous protocols are generally simple and require no specific schedulability analysis, but they do not give good promptness; asynchronous protocols often provide a faster response to mode-change requests, and some of them provide periodicity, but they need a specific schedulability analysis.
The offset is the time delay a protocol may impose on the first release of a new-mode task after t_r. In this paper, offset = δ_earliest − t_r.
Sometimes the feasibility of a mode change relies heavily on finding the offset [3], [16]. Some researchers use the offset in their models or point out its importance, while others try to find a way to calculate it [5], [13]. In [17], for example, an asynchronous protocol that uses offsets was provided by Pedro and Burns, but no way to calculate such offsets was given. In [13], it is stated that how to calculate the offset is an open problem, and an iterative method based on fixed priorities is presented: the offsets of new-mode tasks are chosen from the maximum possible values downward; if a transition is not feasible with the selected values, then shorter values are tried until the minimum values for consistency are exactly the same as those from the previous iteration.
In [18], dynamic voltage scaling with RM (Rate Monotonic) and EDF is studied. From the simulations, Pillai et al. note an interesting phenomenon: the dynamic addition of a task to a task set may cause transient missed deadlines unless one is very careful. However, the temporal complexity of such an insertion is not analyzed.
The mode change based on EDF is also discussed in case studies with video streams [16]. An iterative method is provided to calculate the offset. Given a length of time, for example 400 ms, schedulability is first checked with an assumed offset = 0. If the system is not schedulable, the analysis is repeated with different assumed values chosen by binary search, stopping when the smallest value that makes the system schedulable is found. This iterative method depends on the processor demand criterion for EDF. It is not well suited for use at run time, since each of the logarithmically many iterations requires a full schedulability analysis. Additionally, further studies are needed to define the length of the transition process and to find the time region in which deadline missing is possible.
A protocol to handle admission control is provided in [19]. The framework can deal with overlapping scheduling transients and sporadic tasks. But the model of [19] is quite different from that of this paper, in which bandwidth transfer between tasks is discussed.
Determining task shares on processors is discussed in [20]. A task has an initial weight (bandwidth), which may be increased or decreased, meaning a task can actually be accelerated or decelerated. However, the earliest time to start this operation without missing deadlines is not discussed in [20].
Andersson notes that, unfortunately, the research literature offers no mode-change protocol and corresponding schedulability analysis for a processor scheduled by EDF, and he then presents an analysis for this problem based on the rule that a task τ_i switches from its old mode to a new mode at its next release time if the beginning of its current instance is earlier than t_r [12]. This rule is different from the compression shown in FIGURE 2 of this paper, where τ_i increases its period immediately at t_r, which is better for promptness.
The works most relevant to the model of this paper are [2], [3], [8], [9]. Buttazzo et al. present an elastic scheduling model for task sets based on EDF, in which compression, acceleration, and insertion are discussed [2], [21]. An insertion time δ = (t_i + T_i) − c_i(t_r)/U_i is provided. Deeper research is carried out by Qian, who obtains an earlier smooth time [3], but there is no guarantee that δ = δ_earliest.
The problem of multiple-task compression is studied in [8] and [9], where it is proved that deadline missing is only possible in [d_min, d_max). Theorem 1 is also presented in [8]; it indicates that the time step from the current check to the next may be greater than one, so that δ_earliest can be reached quickly.

VI. CONCLUSION
In summary, for bandwidth transfer among multiple periodic tasks scheduled with EDF, the following important points are shown in this paper: • Some important conclusions are recalled; for example, deadline missing is only possible in [d_min, d_max), which is proved in [8] and [9].
• To get δ_earliest when inserting a new task with the ESITforSNT algorithm, only one check is needed for each M task deadline point. Supposing the M tasks have n deadline points in [d_min, d_max) in total, the number of checks required for δ_earliest is not greater than 2n.
• The experiments on bandwidth transfer are specially designed. Firstly, a standard algorithm is utilized; although it is time-consuming, its correctness is obvious. Every experimental δ_earliest obtained from ESITforSNT is compared with the one from the standard algorithm. Secondly, effective task sets are configured for the experiments. Configuring task sets randomly is not a good way to exercise this type of bandwidth transfer; therefore, typical, accurate, and well-selected configurations are adopted, and more than 201 task sets are tested. These configurations are not only effective for verifying the current theorems but also convenient for comparisons in future studies.

APPENDIX B PROOF OF THEOREM 2
Proof: From d_j(k)(0) to d_j(k)(1), the new task increases its processor demand while an M task does not. Thus we have Δ(t_r, d_j(k)(1)) = Δ(t_r, d_j(k)(0)) + C_j − T_j ≤ Δ(t_r, d_j(k)(0)) ≤ 0. This indicates that d_j(k)(1) also passes its check. Similarly, the other deadline points of τ_j after d_j(k)(1) (if any) in the region (d_M(k), d_M(k+1)) will also pass their checks.

APPENDIX C PROOF OF THEOREM 3
Proof: There are two cases: Case 1 and Case 2.
Proof of Case 1: In this case, due to the check value Φ(t_r, d_M(k)) > 0, r_j should be delayed by L^(k). The delay will of course decrease the processor demand D_j(t_r, d_M(k)).
If Φ(t_r, d_M(k)) > C_j, L^(k) must be longer than T_j to reduce D_j(t_r, d_M(k)) enough for Φ(t_r, d_M(k)) ≤ 0. As shown in FIGURE 4, the delay is implemented in three steps: the first delay, the second delay, and the third delay. The amount of delay in each step is denoted by L^(k)|_1, L^(k)|_2, and L^(k)|_3, respectively. Naturally,
L^(k) = L^(k)|_1 + L^(k)|_2 + L^(k)|_3.
For convenience, for a discussed time t, D_total(t_r, t)|_1, D_total(t_r, t)|_2, and D_total(t_r, t)|_3 are used to represent the total processor demands in (t_r, t] after the first, the second, and the third step, respectively. Correspondingly, their check values are Φ(t_r, t)|_1, Φ(t_r, t)|_2, and Φ(t_r, t)|_3. Before the delay, we have d_jx ≤ d_M(k) < d_jy.
After the first step delay, the release of τ_j is delayed by
L^(k)|_1 = d_M(k) − d_jx, (9)
so that d_jy shifts to d'_jy, and d_jx to d'_jx = d_M(k). Notice that the total demand D_total(t_r, d_M(k)) remains unchanged when the first step delay is completed; thus we have D_total(t_r, d_M(k))|_1 = D_total(t_r, d_M(k)) and Φ(t_r, d_M(k))|_1 = Φ(t_r, d_M(k)).
Through the second step delay, r_j is increased by
L^(k)|_2 = ⌈(Φ(t_r, d_M(k))|_1 − C_j)/C_j⌉ · T_j, (10)
and d'_jy moves to d''_jy. The purpose of this step is to make the check value become less than or equal to C_j:
D_total(t_r, d_M(k))|_2 = D_total(t_r, d_M(k))|_1 − (L^(k)|_2/T_j) · C_j, and
Φ(t_r, d_M(k))|_2 = Φ(t_r, d_M(k))|_1 − (L^(k)|_2/T_j) · C_j ≤ Φ(t_r, d_M(k))|_1 − ((Φ(t_r, d_M(k))|_1 − C_j)/C_j) · C_j = C_j.
VOLUME 8, 2020
Now discuss the third step delay. The deadline d''_jy will shift to d'''_jy. The check with d_M(k) becomes passed as soon as L^(k)|_3 > 0, since Φ(t_r, d_M(k))|_2 ≤ C_j. However, L^(k)|_3 > 0 may not be enough for d'''_jy to pass its check.
According to the delaying rule, in order to make both d_M(k) and d'''_jy pass, the minimum value of L^(k)|_3 should be calculated by (11). If d'''_jy becomes equal to or greater than d_M(k+1) due to the third step delay, then d'''_jy will be checked when we do the check with the next time interval [d_M(k+1), d_M(k+2)), or with some interval after that. Even if this happens, L^(k)|_3 must not be less than the value given by (11), according to Theorem 1. Therefore, (11) is correct in any case when we do the check with d_M(k).
In addition, if d_M(k) equals d_jx, then we get L^(k)|_1 = 0 from (9), and the first step delay is not needed. Also, if Φ(t_r, d_M(k)) ≤ C_j, then we have L^(k)|_2 = 0 from (10), and the second step delay is omitted.
If d_j(k)(0) ≥ d_M(k+1) after the delay, then d_j(k)(0) will be checked when we do the check with d_M(k+1), or with deadline points after d_M(k+1). Even if this happens, L^(k) must not be less than the value given by (12), according to Theorem 1. Therefore, (12) is correct in any case when we do the check with d_j(k)(0).
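The first- and second-step delays of this proof can be sketched numerically as follows; here `phi_1` stands for the check value Φ(t_r, d_M(k))|_1 after the first step, and all identifiers are illustrative assumptions rather than the paper's own code.

```python
from math import ceil

def first_step_delay(d_Mk: float, d_jx: float) -> float:
    """(9): delay that shifts the deadline d_jx of the new task τ_j
    onto the check point d_M(k)."""
    return d_Mk - d_jx

def second_step_delay(phi_1: float, C_j: float, T_j: float) -> float:
    """(10): delay by whole periods T_j until the check value is at most C_j;
    each whole period removes one job, i.e. C_j of demand, from (t_r, d_M(k)]."""
    if phi_1 <= C_j:
        return 0.0  # the second step is omitted
    return ceil((phi_1 - C_j) / C_j) * T_j
```

For example, with phi_1 = 5, C_j = 2, and T_j = 10, two whole periods of delay are needed, so the second-step delay is 20; with phi_1 ≤ C_j it is 0, matching the special cases noted above.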

APPENDIX D PROOF OF LEMMA 1
Proof: Start with r_j = t_r. Let L^(k)first be the delay of r_j needed to make the deadline point d_M(k) pass its check if this point is checked before all other deadline points. The maximum of all these delays is denoted by L^(k)first_max, that is, L^(k)first_max = max{L^(0)first, L^(1)first, · · · , L^(n−1)first}.
Obviously, δ_earliest depends on L^(k)first_max only. That is to say, the total amount of delay of r_j must be equal to L^(k)first_max to make the checks of all the points pass, no matter which point is checked first.
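The statement of Lemma 1 can be sketched directly; the list `L_first` of per-point first-check delays is an assumed input, and the return value is written here as the absolute release time t_r plus the maximum delay (whether δ_earliest denotes the absolute time or the delay itself is a matter of convention).

```python
def earliest_release(t_r: float, L_first: list) -> float:
    """delta_earliest from Lemma 1: t_r plus the maximum of the per-point
    delays L^(k)first; if no deadline point needs any delay, the new task
    can be released at t_r itself."""
    return t_r + max(L_first, default=0.0)
```

For example, with per-point delays [0, 3, 5, 2] and t_r = 10, only the largest delay, 5, matters, giving a release time of 15.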