2009 21st Euromicro Conference on Real-Time Systems (ECRTS '09)

Date: 1-3 July 2009

Displaying Results 1 - 25 of 36
  • [Front cover]

    Publication Year: 2009, Page(s): C1
    PDF (110 KB) | Freely Available from IEEE
  • [Title page i]

    Publication Year: 2009, Page(s): i
    PDF (44 KB) | Freely Available from IEEE
  • [Title page iii]

    Publication Year: 2009, Page(s): iii
    PDF (66 KB) | Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2009, Page(s): iv
    PDF (104 KB) | Freely Available from IEEE
  • Table of contents

    Publication Year: 2009, Page(s): v - vii
    PDF (200 KB) | Freely Available from IEEE
  • Message from the Program Chair

    Publication Year: 2009, Page(s): viii
    PDF (68 KB) | HTML | Freely Available from IEEE
  • Program Committee

    Publication Year: 2009, Page(s): ix
    PDF (75 KB) | Freely Available from IEEE
  • List of Reviewers

    Publication Year: 2009, Page(s): x
    PDF (54 KB) | Freely Available from IEEE
  • Real-Time Communication Analysis with a Priority Share Policy in On-Chip Networks

    Publication Year: 2009, Page(s): 3 - 12
    Cited by: Papers (2)
    PDF (1982 KB) | HTML

    Wormhole switching with fixed-priority preemption has been proposed as a possible solution for real-time on-chip communication. However, its hardware implementation cost is high, which constrains practical deployment. To address this problem, we propose a new solution that uses a priority share policy to reduce the resource overhead while still achieving hard real-time service guarantees. The corresponding composite model-based schedulability analysis technique and priority allocation scheme are presented in this paper. Experimental results show that significant resource savings can be achieved with no performance degradation in terms of missed deadlines. Using this approach, a broad class of real-time communication with different QoS requirements can be explored and developed on a SoC/NoC communication platform.
  • End-to-End Delay Analysis of Distributed Systems with Cycles in the Task Graph

    Publication Year: 2009, Page(s): 13 - 22
    Cited by: Papers (4)
    PDF (321 KB) | HTML

    A significant problem with no simple solution in the current real-time literature is analyzing the end-to-end schedulability of tasks in distributed systems with cycles in the task graph. Prior approaches, including network calculus and holistic schedulability analysis, work best for acyclic task flows: they involve iterative solutions, or offer no solutions at all, when flows are non-acyclic. This paper demonstrates the construction of the first generalized closed-form expression for schedulability analysis in distributed task systems with non-acyclic flows. The approach is a significant extension of our previous work on schedulability in directed acyclic graphs. Our main result is a bound on end-to-end delay for a task in a distributed system with non-acyclic task flows. The delay bound allows one of several schedulability tests to be performed. Evaluation shows that the schedulability tests thus constructed are less pessimistic than prior approaches for large distributed systems.
  • Refactoring Asynchronous Event Handling in the Real-Time Specification for Java

    Publication Year: 2009, Page(s): 25 - 34
    Cited by: Papers (1)
    PDF (640 KB) | HTML

    The primary goal of asynchronous event handling (AEH) in the Real-Time Specification for Java (RTSJ) is to provide a lightweight concurrency mechanism. However, the RTSJ neither provides a well-defined guideline on how to implement AEH nor requires documentation of the AEH model used in an implementation. The AEH API in the RTSJ is also criticized as lacking configurability, since it provides no means for programmers to exercise fine control over the AEH facilities, such as the mapping between real-time threads and handlers. For these reasons, the application programming interface (API) needs refactoring to give programmers more configurability. This paper therefore proposes a set of AEH-related classes and interfaces that enable flexible configuration of AEH components. We have implemented the refactored, configurable AEH API on an existing RTSJ implementation, and this paper shows that it allows more configurability than the current AEH API in the RTSJ does. Consequently, programmers are able to tailor the AEH subsystem to fit their applications' particular needs.
  • Combining Worst-Case Timing Models, Loop Unrolling, and Static Loop Analysis for WCET Minimization

    Publication Year: 2009, Page(s): 35 - 44
    Cited by: Papers (1)
    PDF (247 KB) | HTML

    Program loops are notorious for their optimization potential on modern high-performance architectures. Compilers aim at aggressively transforming them to achieve large improvements in program performance. In particular, loop unrolling has proven highly effective over the past decades at achieving significant increases in average-case performance. In this paper, we present loop unrolling tailored towards real-time systems. Our novel optimization is driven by worst-case execution time (WCET) information to effectively minimize the program's worst-case behavior. To exploit the maximal optimization potential, the determination of a suitable unrolling factor is based on precise loop iteration counts provided by a static loop analysis. In addition, our heuristics avoid the adverse effects of unrolling that result from instruction cache overflows and the generation of additional spill code. Results on 45 real-life benchmarks demonstrate that aggressive loop unrolling can yield WCET reductions of up to 13.7% over the simple, naive approaches employed by many production compilers.
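    As a rough illustration of how an unrolling factor can be derived from statically known iteration counts while respecting the instruction cache and register pressure, consider the sketch below. It is only a guess at the flavour of such a heuristic under invented machine parameters, not the paper's algorithm or timing model.

    ```python
    # Rough sketch of a WCET-aware unrolling-factor choice: take the largest factor that
    # (a) divides the statically known iteration count (so no remainder loop is needed),
    # (b) keeps the unrolled body within the instruction cache, and (c) stays within a
    # register budget to avoid spill code. All numbers are invented.

    ICACHE_BYTES = 4096          # assumed instruction cache capacity
    BODY_BYTES = 96              # assumed code size of one loop body copy
    LOOP_OVERHEAD_BYTES = 16     # branch/counter code shared by all copies
    REGS_FREE = 8                # assumed spare registers
    REGS_PER_COPY = 2            # extra live registers per unrolled copy

    def unroll_factor(iteration_count, max_factor=16):
        best = 1
        for f in range(2, max_factor + 1):
            fits_icache = LOOP_OVERHEAD_BYTES + f * BODY_BYTES <= ICACHE_BYTES
            no_spill = f * REGS_PER_COPY <= REGS_FREE
            divides = iteration_count % f == 0
            if fits_icache and no_spill and divides:
                best = f
        return best

    print(unroll_factor(64))   # -> 4 with the assumed numbers
    ```
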
  • Deriving the Worst-Case Execution Time Input Values

    Publication Year: 2009, Page(s): 45 - 54
    Cited by: Papers (2)
    PDF (324 KB) | HTML

    A Worst-Case Execution Time (WCET) analysis derives upper bounds on the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A major problem with today's WCET analysis approaches is that they give no feedback on the particular values of the input variables that cause the program's WCET. However, this is important information for the real-time system developer. We present a novel approach to overcome this problem. In particular, we present a method, based on a combination of input-sensitive static WCET analysis and systematic search over the value space of the input variables, to derive the input value combination that causes the WCET. We also present several approaches to speed up the search. Our evaluations show that the WCET input values can be derived relatively quickly for many types of programs, even for programs with large input value spaces. We also show that the WCET estimates derived using the WCET input values are often much tighter than the WCET estimates derived when all possible input value combinations are taken into account.
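    The combination of input-sensitive analysis and systematic search described above can be pictured as a branch-and-bound over input-variable ranges. The sketch below illustrates that idea only; `wcet_upper_bound` is a hypothetical toy stand-in for an input-sensitive static WCET analysis, and the cost model is invented.

    ```python
    # Toy stand-in for an input-sensitive static WCET analysis: given a range (lo, hi)
    # for each input variable, return a safe upper bound on execution time (cycles).
    # A real oracle would invoke a WCET tool; this one uses a made-up cost model.
    def wcet_upper_bound(ranges):
        (x_lo, x_hi), (y_lo, y_hi) = ranges["x"], ranges["y"]
        branch = 200 if x_hi > 30 else 0          # a branch only reachable for large x
        return 120 + 5 * x_hi + branch + 2 * y_hi

    def find_wcet_inputs(ranges):
        """Branch-and-bound over input ranges: keep splitting the widest range,
        pruning subspaces whose bound cannot exceed the best fixed assignment seen."""
        best, best_wcet = None, -1
        stack = [ranges]
        while stack:
            cur = stack.pop()
            bound = wcet_upper_bound(cur)
            if bound <= best_wcet:
                continue                          # prune: cannot improve on best candidate
            if all(lo == hi for lo, hi in cur.values()):
                best, best_wcet = cur, bound      # all inputs fixed: candidate WCET inputs
                continue
            var = max(cur, key=lambda v: cur[v][1] - cur[v][0])
            lo, hi = cur[var]
            mid = (lo + hi) // 2
            stack.append({**cur, var: (lo, mid)})
            stack.append({**cur, var: (mid + 1, hi)})
        return best, best_wcet

    print(find_wcet_inputs({"x": (0, 100), "y": (0, 50)}))
    ```
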
  • Profinet IO IRT Message Scheduling

    Publication Year: 2009, Page(s): 57 - 65
    Cited by: Papers (4)
    PDF (337 KB) | HTML

    This paper presents an algorithm for creating a static schedule of Profinet IO IRT communication, an industrial Ethernet protocol standardised in IEC 61158. The algorithm offers an alternative to the available commercial tool, providing comparable results in terms of the resulting schedule length. Furthermore, we extend the problem with useful time constraints that provide greater flexibility with respect to individual messages. Thanks to this flexibility, it is possible, for example, to place selected messages in various parts of the communication cycle, to define end-to-end delays, or to increase the computational time available to the main-controller application. The solution is based on a formulation of the Profinet IO IRT scheduling problem as resource-constrained project scheduling with temporal constraints (PS|temp|Cmax).
  • Hierarchical Utilization Control for Real-Time and Resilient Power Grid

    Publication Year: 2009, Page(s): 66 - 75
    Cited by: Papers (1)
    PDF (400 KB) | HTML

    Blackouts can be disastrous, causing enormous economic losses. They usually occur when appropriate corrective actions are not taken effectively for an initial contingency, resulting in cascading failures. Therefore, it is critical to complete the tasks that run power grid computing algorithms in the energy management system (EMS) in a timely manner in order to avoid blackouts. This problem can be formulated as guaranteeing end-to-end deadlines in a distributed real-time embedded (DRE) system. However, existing work in power grid computing runs those tasks in an open-loop manner, which gives poor guarantees on timeliness and thus a high probability of blackouts. Furthermore, existing feedback scheduling algorithms for DRE systems cannot be directly adopted to handle the significantly different timescales of power grid computing tasks. In this paper, we propose a hierarchical control solution that guarantees the deadlines of these tasks in the EMS by grouping them based on their characteristics. Our solution is based on well-established control theory for guaranteed control accuracy and system stability. Simulation results based on a realistic workload configuration demonstrate that our solution can guarantee timeliness for power grid computing and hence help avoid blackouts.
  • Improvement to Quick Processor-Demand Analysis for EDF-Scheduled Real-Time Systems

    Publication Year: 2009, Page(s): 76 - 86
    Cited by: Papers (1)
    PDF (556 KB) | HTML

    Earliest Deadline First (EDF) is an optimal scheduling algorithm for uniprocessor real-time systems. Quick Processor-demand Analysis (QPA) provides an efficient and exact schedulability test for EDF scheduling with arbitrary relative deadlines. In this paper, we propose Improved Quick Processor-demand Analysis (QPA*), which builds on QPA. Through extensive experiments, we show that QPA* can significantly reduce the calculations required to perform an exact test for unschedulable systems, and we prove that the computation time for testing schedulable systems is hardly affected. Hence, the calculations required for general systems can be significantly decreased.
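    For background, the sketch below shows the standard processor-demand function h(t) and a QPA-style iteration in the spirit of Zhang and Burns' original QPA, which QPA* builds upon; it does not include the QPA* improvements, and the task set is an invented example.

    ```python
    # Processor-demand function h(t) and a QPA-style exact EDF test.
    # Tasks are (C = WCET, D = relative deadline, T = minimum inter-arrival time).
    tasks = [(2, 5, 10), (3, 12, 15), (5, 20, 30)]

    def h(t):
        """Processor demand in [0, t): execution released and due within the interval."""
        return sum(((t - D) // T + 1) * C for C, D, T in tasks if t >= D)

    def busy_period():
        """Synchronous busy-period length: a safe upper bound L on the points to check."""
        w = sum(C for C, _, _ in tasks)
        while True:
            nxt = sum(((w + T - 1) // T) * C for C, _, T in tasks)   # ceil(w/T) * C
            if nxt == w:
                return w
            w = nxt

    def prev_deadline(t):
        """Largest absolute deadline strictly smaller than t (0 if none)."""
        return max([D + (t - D - 1) // T * T for _, D, T in tasks if t > D], default=0)

    def qpa_schedulable():
        if sum(C / T for C, _, T in tasks) > 1:
            return False
        d_min = min(D for _, D, _ in tasks)
        t = prev_deadline(busy_period())
        while 0 < t and d_min < h(t) <= t:          # walk t downward via h(t)
            t = h(t) if h(t) < t else prev_deadline(t)
        return h(t) <= d_min

    print(qpa_schedulable())   # True for the example task set
    ```
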
  • Approximate Bandwidth Allocation for Compositional Real-Time Systems

    Publication Year: 2009, Page(s): 87 - 96
    Cited by: Papers (3)
    PDF (308 KB) | HTML

    Allocation of bandwidth among components is a fundamental problem in compositional real-time systems. State-of-the-art algorithms for bandwidth allocation use either exponential-time or pseudo-polynomial-time techniques for exact allocation, or linear-time, utilization-based techniques which may over-provision bandwidth. In this paper, we develop a fully-polynomial-time approximation scheme (FPTAS) for allocating bandwidth for sporadic task systems scheduled by earliest-deadline-first (EDF) upon an Explicit-Deadline Periodic (EDP) resource. Our algorithm takes, as parameters, the task system and an accuracy parameter ε > 0, and returns a bandwidth which is guaranteed to be at most a factor (1 + ε) more than the optimal minimum bandwidth required to successfully schedule the task system. Furthermore, the algorithm has time complexity that is polynomial in the number of tasks and 1/ε. Via simulations over randomly generated task systems, we have observed a runtime decrease of several orders of magnitude and a small relative error when comparing our proposed algorithm with the exact algorithm, even for medium-sized values of ε (e.g., ε ≈ 0.3).
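    FPTAS-style EDF analyses typically obtain their accuracy parameter ε by approximating each task's demand bound function with a bounded number of exact steps followed by a line of slope C/T (in the style of Albers and Slomka). The sketch below shows that approximation together with a deliberately simplified fluid supply B·t standing in for the paper's EDP supply bound function; it illustrates the general technique, not the paper's specific algorithm, and the task values are invented.

    ```python
    # Demand bound function of a sporadic task (C, D, T) and its k-step approximation:
    # exact staircase for the first k steps, then a line of slope C/T. The approximation
    # never underestimates dbf and overestimates by at most roughly a factor (1 + 1/k),
    # which is how an accuracy parameter epsilon ~ 1/k enters FPTAS-style analyses.

    def dbf(task, t):
        C, D, T = task
        return ((t - D) // T + 1) * C if t >= D else 0

    def dbf_approx(task, t, k):
        C, D, T = task
        if t < D + (k - 1) * T:
            return dbf(task, t)
        return k * C + (C / T) * (t - (D + (k - 1) * T))

    def min_fluid_bandwidth(tasks, k):
        """Smallest B with total approximate demand <= B*t; the fluid supply B*t is a
        simplification standing in for the EDP supply bound function."""
        util = sum(C / T for C, _, T in tasks)
        points = sorted({D + j * T for _, D, T in tasks for j in range(k)})
        return max([sum(dbf_approx(tsk, t, k) for tsk in tasks) / t for t in points] + [util])

    tasks = [(1, 4, 8), (2, 10, 12), (3, 15, 20)]   # invented (C, D, T) triples
    for k in (2, 4, 16):
        print(k, round(min_fluid_bandwidth(tasks, k), 3))   # bound tightens as k grows
    ```
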
  • On-Line Scheduling Algorithm for the Gravitational Task Model

    Publication Year: 2009, Page(s): 97 - 106
    Cited by: Papers (2)
    PDF (489 KB) | HTML

    Some applications of real-time scheduling have target demands in addition to the commonly used start-time and deadline constraints: a task should be executed at a target point in time for maximum utility, but can execute around this point, albeit at lower utility. Examples of such applications include control and media processing. In this paper, we present a scheduling algorithm for the gravitational task model that we proposed in [1], [2]. This model allows tasks to express their utility as a function of the point of execution and the target point. The proposed scheduler is divided into two phases: ordering and timing. The former uses a heuristic to order the jobs in the ready queue, and the latter schedules the execution based on the equilibrium [1], [2]. The new ordering heuristic accounts for both acceptance ratio and utility accrual, hence achieving better results than the scheduler proposed in [1], [2]. Moreover, the ordering phase has complexity O(n log n) and the timing phase has complexity O(n), a significant reduction compared to previous work. The paper also contains an analysis of the complexity of the proposed scheduler and an evaluation through simulation.
  • A New Notion of Useful Cache Block to Improve the Bounds of Cache-Related Preemption Delay

    Publication Year: 2009, Page(s): 109 - 118
    Cited by: Papers (8)
    PDF (302 KB) | HTML

    In preemptive real-time systems, scheduling analyses are based on the worst-case response times of tasks. This response time includes the worst-case execution time (WCET) and context switch costs. In case of preemption, cache memories may suffer interference between memory accesses of the preempted and the preempting task. This interference leads to additional reloads that are referred to as cache-related preemption delay (CRPD), which constitutes a large part of the context switch cost. In this article, we focus on the computation of upper bounds on the CRPD using the concept of useful cache blocks (UCBs): memory blocks that may be in the cache before a program point and may be reused after it. When a preemption occurs at that point, the number of additional cache misses is bounded by the number of useful cache blocks. We tighten the CRPD bound by using a modified notion of UCB: only cache blocks that are definitely cached are considered useful by our approach. As we show in this paper, the CRPD computed from our notion, when used in combination with the bound on the WCET, delivers a safe bound on the execution time in case of preemption. Furthermore, the modified definition simplifies the UCB computation for set-associative LRU and data caches. Experimental results show that our approach provides up to 90% tighter CRPD bounds.
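    The UCB idea can be illustrated on a toy direct-mapped cache and a straight-line access trace, as sketched below. The paper's analysis works on whole control-flow graphs and set-associative LRU/data caches, and tightens the notion to blocks that are definitely cached; everything here is invented and for intuition only.

    ```python
    # Toy useful-cache-block computation: a block is useful at a program point if it is
    # resident there and the next access to its set (after the point) is a reuse rather
    # than an eviction. The per-point CRPD bound is |UCB| * block reload time.

    N_SETS = 4                                   # assumed direct-mapped cache with 4 sets
    BRT = 100                                    # assumed block reload time (cycles)
    trace = [0, 1, 2, 0, 1, 4, 5, 0, 1]          # memory block numbers accessed in order

    def ucbs_at(point):
        cache = {}                               # set index -> resident block before `point`
        for b in trace[:point]:
            cache[b % N_SETS] = b
        useful, resolved = set(), set()
        for b in trace[point:]:
            s = b % N_SETS
            if s in resolved or s not in cache:
                continue
            if cache[s] == b:
                useful.add(b)                    # hit on pre-preemption content
            resolved.add(s)                      # reused or evicted: this set is decided
        return useful

    for p in range(len(trace) + 1):
        u = ucbs_at(p)
        print(f"point {p}: UCB = {sorted(u)}, CRPD bound = {len(u) * BRT} cycles")
    ```
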
  • Precise Worst-Case Execution Time Analysis for Processors with Timing Anomalies

    Publication Year: 2009, Page(s): 119 - 128
    Cited by: Papers (1)
    PDF (253 KB) | HTML

    This paper explores timing anomalies in WCET analysis. Timing anomalies add to the complexity of WCET analysis and make it hard to apply divide-and-conquer strategies to simplify the WCET assessment. So far, timing anomalies have been described as a problem that occurs when the WCET of a control-flow graph is computed from the WCETs of its subgraphs, i.e., from a series decomposition. This paper extends the state of the art by (i) showing that timing anomalies can also occur in a parallel decomposition of the WCET problem, i.e., when complexity is reduced by splitting the hardware state space and performing a separate WCET analysis for hardware components that work in parallel, (ii) proving that the potential occurrence of parallel timing anomalies makes the parallel decomposition technique unsafe (i.e., one cannot guarantee that the calculated WCET bound does not underestimate the WCET), and (iii) identifying special cases of parallel timing anomalies for which the parallel decomposition technique is safe. The latter provides an important hint to hardware designers on their way to constructing predictable hardware components.
  • Using Randomized Caches in Probabilistic Real-Time Systems

    Publication Year: 2009, Page(s): 129 - 138
    Cited by: Papers (6)
    PDF (290 KB) | HTML

    While hardware caches are generally effective at improving application performance, they greatly complicate performance prediction. Slight changes in memory layout or data access patterns can lead to large and systematic increases in cache misses, degrading performance. In the worst case, these misses can effectively render the cache useless. These pathological cases, or "cache risk patterns", are difficult to predict, test or debug, and their presence limits the usefulness of caches in safety-critical real-time systems, especially in hard real-time environments. In this paper, we explore the effect of randomized cache replacement policies in real-time systems with stringent timing constraints. We present simulation-based results on representative examples that illustrate the problem of performance anomalies with standard cache replacement policies. We show that, by eliminating dependencies on access history, randomized replacement greatly reduces the risk of these cache-based performance anomalies, enabling probabilistic worst-case execution time analysis.
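    A minimal simulation of the kind of pathological pattern the abstract alludes to is sketched below: cyclically accessing one block more than the associativity defeats LRU entirely, while random replacement retains some reuse. The geometry and trace are invented and purely illustrative of the general phenomenon, not of the paper's evaluation.

    ```python
    # Toy single-set simulation contrasting LRU with random replacement on a cyclic
    # access pattern slightly larger than the set.
    import random

    WAYS = 4
    trace = list(range(WAYS + 1)) * 200          # cyclic accesses to 5 blocks, 4-way set

    def misses(policy, seed=0):
        rng = random.Random(seed)
        cache, lru_order, miss = [], [], 0
        for b in trace:
            if b in cache:
                if policy == "lru":
                    lru_order.remove(b)          # refresh recency on a hit
                    lru_order.append(b)
                continue
            miss += 1
            if len(cache) < WAYS:                # cold start: fill an empty way
                cache.append(b)
                lru_order.append(b)
            else:
                if policy == "lru":
                    victim = lru_order.pop(0)    # evict least recently used
                    lru_order.append(b)
                else:
                    victim = rng.choice(cache)   # evict a uniformly random resident block
                cache[cache.index(victim)] = b
        return miss

    print("LRU misses:   ", misses("lru"))       # the cyclic pattern defeats LRU entirely
    print("random misses:", misses("random"))    # typically far fewer misses
    ```
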
  • Sustainable Multiprocessor Scheduling of Sporadic Task Systems

    Publication Year: 2009, Page(s): 141 - 150
    Cited by: Papers (7)
    PDF (224 KB) | HTML

    A scheduling policy or a schedulability test is defined to be sustainable with respect to a particular workload model if any task system represented in that model that is determined to be schedulable remains so if it behaves "better" than mandated by its specifications. We investigate the sustainability properties of global scheduling algorithms when applied to systems represented using the sporadic task model. We show that Fixed-Priority (FP) scheduling of sporadic task sets is sustainable under a variety of scheduling parameter relaxations, including decreased execution requirements, later arrivals, and deadline relaxations. It follows that all sufficient tests of global FP schedulability are sustainable for sporadic task systems. We show that the Earliest Deadline First (EDF) and Earliest-Deadline with Zero Laxity scheduling policies are sustainable with respect to decreased execution requirements and later arrivals. We also introduce a notion of self-sustainability, and show that many widely-used EDF schedulability tests are not self-sustainable, although one is.
  • Two Protocols for Scheduling Multi-mode Real-Time Systems upon Identical Multiprocessor Platforms

    Publication Year: 2009, Page(s): 151 - 160
    Cited by: Papers (7)
    PDF (339 KB) | HTML

    We consider the global and preemptive scheduling problem of multi-mode real-time systems upon identical multiprocessor platforms. In a multi-mode system, the system can change from one mode to another, replacing the current task set with a new one. Ensuring that deadlines are met requires not only that a schedulability test be performed on the tasks of each mode, but also that (i) a protocol for transitioning from one mode to another is specified and (ii) a schedulability test for each transition is performed. We propose two protocols which ensure that all the expected requirements are met during every transition between every pair of operating modes of the system. Moreover, we prove the correctness of our proposed algorithms by extending the theory on the makespan determination problem.
  • A Norm Approach for the Partitioned EDF Scheduling of Sporadic Task Systems

    Publication Year: 2009, Page(s): 161 - 169
    Cited by: Papers (3)
    PDF (254 KB) | HTML

    In this paper, we propose a new approach to the partitioned Earliest Deadline First (EDF) scheduling of sporadic task systems. We consider the case of constrained deadlines, where the deadlines of tasks are less than or equal to their periods. We introduce the concept of the EDF norm to define the space of WCET values that result in schedulable systems for fixed periods and relative deadlines. Based on this concept, a necessary and sufficient feasibility condition can be derived to check whether EDF scheduling is valid for a given partitioning. The EDF norm has interesting convexity properties that permit using a linear programming approach to reduce the number of points at which the EDF norm needs to be checked. From the EDF norm, we derive a new Worst Fit Decreasing partitioning heuristic and compare its performance with two existing partitioning heuristics based on density partitioning and demand bound function approximation. We then compare the performance of the heuristics in terms of the resource augmentation paradigm.
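    The EDF norm itself is specific to the paper; the sketch below shows only the classical demand-bound-function test that partitioned-EDF heuristics of this kind rely on when deciding whether a tentatively assigned task fits on a processor, wrapped in a simple Worst Fit Decreasing loop. Task values and the horizon are invented, and a real implementation would bound the horizon by the synchronous busy period rather than a fixed limit.

    ```python
    # Classical per-processor EDF check for constrained-deadline sporadic tasks (C, D, T),
    # plus a generic Worst Fit Decreasing partitioning loop around it.

    def dbf(tasks, t):
        return sum(((t - D) // T + 1) * C for C, D, T in tasks if t >= D)

    def edf_fits(tasks, horizon=500):
        """Demand test dbf(t) <= t at every absolute deadline up to `horizon`."""
        if sum(C / T for C, _, T in tasks) > 1:
            return False
        points = {D + k * T for C, D, T in tasks for k in range((horizon - D) // T + 1)}
        return all(dbf(tasks, t) <= t for t in points if t <= horizon)

    def worst_fit_decreasing(tasks, m):
        """Assign tasks in decreasing density (C/D) to the least-utilized processor that
        still passes the demand test; returns None if some task fits nowhere."""
        procs = [[] for _ in range(m)]
        for task in sorted(tasks, key=lambda x: x[0] / x[1], reverse=True):
            for p in sorted(procs, key=lambda p: sum(C / T for C, _, T in p)):
                if edf_fits(p + [task]):
                    p.append(task)
                    break
            else:
                return None
        return procs

    tasks = [(1, 3, 6), (2, 5, 10), (3, 8, 12), (2, 6, 9), (4, 14, 20)]   # invented tasks
    print(worst_fit_decreasing(tasks, 2))
    ```
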
  • Predictable Runtime Monitoring

    Publication Year: 2009, Page(s): 173 - 183
    Cited by: Papers (2)
    PDF (401 KB) | HTML

    Dynamic program monitoring has been applied in software-intensive systems to detect runtime constraint violations and trigger system recovery actions. Uncontrolled monitoring activities may, however, delay detection of a violation for an unbounded time and, worse, affect the original system's schedulability. In this paper, we introduce the concept of predictable monitoring, which demands a bound on detection latency while ensuring temporal non-interference by the monitoring process. We present off-line analysis techniques for predicting the maximum detection latency with fixed-priority scheduling under two types of monitoring schemes: synchronous and asynchronous. For asynchronous monitoring, we illustrate how to achieve predictable monitoring by bounding the detection latency and controlling the monitoring budget using a bandwidth-preserving, server-based approach.