2011 IEEE 32nd Real-Time Systems Symposium (RTSS)

Date: Nov. 29 - Dec. 2, 2011

Displaying Results 1 - 25 of 44
  • [Front cover]

    Publication Year: 2011, Page(s): C1
  • [Title page i]

    Publication Year: 2011, Page(s): i
  • [Title page iii]

    Publication Year: 2011, Page(s): iii
  • [Copyright notice]

    Publication Year: 2011, Page(s): iv
  • Table of contents

    Publication Year: 2011, Page(s): v - viii
  • Message from the Chairs

    Publication Year: 2011, Page(s): ix
  • Organizing Committee

    Publication Year: 2011, Page(s): x
  • Technical Program Committees

    Publication Year: 2011, Page(s): xi - xiii
  • Reviewers

    Publication Year: 2011, Page(s): xiv
  • Certification-Cognizant Time-Triggered Scheduling of Mixed-Criticality Systems

    Publication Year: 2011, Page(s): 3 - 12
    Cited by: Papers (11)

    In many modern embedded platforms, safety-critical functionalities that must be certified correct to very high levels of assurance co-exist with less critical software that is not subject to certification requirements. Recent research in real-time scheduling theory has yielded some promising techniques for meeting the dual goals of (i) being able to certify the safety-critical functionalities under very conservative assumptions, and (ii) ensuring high utilization of platform resources under less pessimistic assumptions. This research has centered on an event-triggered, priority-driven approach to scheduling. However, current practice in many safety-critical domains, including (the safety-critical components of) automotive and avionics systems and factory automation, favors a time-triggered approach. In such time-triggered systems, non-interference with safety-critical components by non-critical ones is ensured by strict isolation between components of different criticalities; although such isolation facilitates the certification of the safety-critical functionalities, it can cause very low resource utilization. The research reported in this document is, to our knowledge, the first to study time-triggered scheduling from the perspective of both ensuring certifiability of high-criticality functionalities and obtaining high resource utilization, as in (i) and (ii) above. We present algorithms for time-triggered scheduling of mixed-criticality systems that offer resource utilization guarantees similar to those of event-triggered scheduling. Since the time-triggered approach currently seems to find greater acceptance with certification authorities, it is hoped that this research will hasten the adoption of these results in building embedded systems that are subject to mandatory certification.

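    The two-table idea sketched in the abstract above can be illustrated in a few lines. The Python fragment below is purely illustrative and is not the paper's algorithm: the tables, job names, budgets, and switch rule are all invented. A dispatcher follows a LO-criticality table built from optimistic budgets; if a HI-criticality job exhausts its LO allotment with work remaining, the dispatcher falls over to a HI-criticality table that reserves the certified worst-case budgets.

```python
# Hypothetical two-table dispatcher (invented tables and switch rule,
# for illustration only). Slot granularity is one time unit.
LO_TABLE = ["J1_HI", "J1_HI", "J2_LO", "J3_HI", "J2_LO", "idle"]
HI_TABLE = ["J1_HI", "J1_HI", "J1_HI", "J3_HI", "J3_HI", "J3_HI"]

def dispatch(actual_demand):
    """actual_demand: number of slots each job really needs at run time."""
    table = LO_TABLE
    remaining = dict(actual_demand)
    for t in range(len(LO_TABLE)):
        job = table[t]
        if job != "idle" and remaining.get(job, 0) > 0:
            remaining[job] -= 1
        # Mode switch: a HI-criticality job has used up its LO-table
        # allotment but still has work left -> continue from the HI table.
        if (table is LO_TABLE and job.endswith("_HI")
                and remaining.get(job, 0) > 0 and job not in table[t + 1:]):
            table = HI_TABLE
        mode = "LO" if table is LO_TABLE else "HI"
        print(f"slot {t}: ran {job} ({mode} mode)")

dispatch({"J1_HI": 3, "J2_LO": 2, "J3_HI": 1})  # J1 overruns its LO budget
```
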
  • Effective and Efficient Scheduling of Certifiable Mixed-Criticality Sporadic Task Systems

    Publication Year: 2011, Page(s): 13 - 23
    Cited by: Papers (22)

    An increasing trend in embedded system design is to integrate components with different levels of criticality into a shared hardware platform for better cost and power efficiency. Such mixed-criticality systems are subject to certification at different levels of rigor, validating the correctness of different subsystems at various confidence levels. The real-time scheduling of certifiable mixed-criticality systems has been recognized as a challenging problem, where using traditional scheduling techniques may result in unacceptable resource waste. In this paper we present an algorithm called PLRS to schedule certifiable mixed-criticality sporadic task systems. PLRS uses fixed-job-priority scheduling, and assigns job priorities by exploring and balancing the asymmetric effects between the workloads on different criticality levels. Compared with the state-of-the-art algorithm by Li and Baruah for such systems, which we refer to as LB, PLRS is both more effective and more efficient: (i) the schedulability test of PLRS not only theoretically dominates, but also on average significantly outperforms, LB's; (ii) the run-time complexity of PLRS is polynomial (quadratic in the number of tasks), which is much more efficient than the pseudo-polynomial run-time complexity of LB.

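    For context on the LB baseline mentioned above, an audit-style priority assignment in the spirit of Li and Baruah can be sketched roughly as follows. This is a simplification with invented job parameters (releases at time 0), not PLRS itself: the lowest priority goes to some job that would still meet its deadline if every other pending job ran first, with the other jobs' demand measured at the candidate's own criticality level.

```python
# Simplified OCBP-style audit (a sketch of the flavor of the LB approach;
# PLRS assigns priorities differently and more efficiently).
def ocbp_order(jobs):
    """jobs: {name: (deadline, {level: wcet}, own_level)}."""
    remaining, lowest_first = dict(jobs), []
    while remaining:
        for name, (d, wcet, lvl) in list(remaining.items()):
            # Demand of all other jobs, at this candidate's own level.
            others = sum(w[1].get(lvl, max(w[1].values()))
                         for n, w in remaining.items() if n != name)
            if others + wcet[lvl] <= d:      # survives going last
                lowest_first.append(name)
                del remaining[name]
                break
        else:
            return None                      # audit failed: no ordering
    return list(reversed(lowest_first))      # highest priority first

jobs = {"a": (4, {1: 1, 2: 2}, 2),
        "b": (3, {1: 1}, 1),
        "c": (10, {1: 2, 2: 4}, 2)}
print(ocbp_order(jobs))                      # -> ['b', 'a', 'c']
```
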
  • Design Optimization of Mixed-Criticality Real-Time Applications on Cost-Constrained Partitioned Architectures

    Publication Year: 2011, Page(s): 24 - 33
    Cited by: Papers (10)

    In this paper we are interested in implementing mixed-criticality hard real-time applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety Integrity Level (SIL), and are scheduled using static-cyclic scheduling. Mixed-criticality tasks can be integrated onto the same architecture only if there is enough spatial and temporal separation among them. We consider that the separation is provided by partitioning, such that applications run in separate partitions and each partition is allocated several time slots on a processor. Tasks of different SILs can share a partition only if they are all elevated to the highest SIL among them. Such elevation leads to increased development costs. We are interested in determining (i) the mapping of tasks to processors, (ii) the assignment of tasks to partitions, (iii) the sequence and size of the time slots on each processor, and (iv) the schedule tables, such that all the applications are schedulable and the development costs are minimized. We propose a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real-life benchmarks.

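    Tabu Search itself is a standard metaheuristic, so a generic skeleton conveys the flavor of the optimization. The neighborhood, cost model, and parameters below are invented toys, not the paper's formulation (which also optimizes the mapping, time-slot sequences, and schedule tables).

```python
import random

def tabu_search(initial, neighbors, cost, iters=500, tenure=20):
    best = current = initial
    tabu = []                                  # recently visited solutions
    for _ in range(iters):
        moves = [s for s in neighbors(current) if s not in tabu]
        if not moves:
            break
        current = min(moves, key=cost)         # best admissible neighbor
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                        # fixed tabu tenure
        if cost(current) < cost(best):
            best = current
    return best

# Toy instance: assign 6 tasks (with invented SILs) to 3 partitions; a
# partition's cost is its task count times the highest SIL in it, mimicking
# "everyone in a partition is elevated to the highest SIL among them".
SIL = [3, 1, 3, 2, 1, 2]

def cost(assign):
    return sum(sum(1 for p in assign if p == q) *
               max(SIL[i] for i, p in enumerate(assign) if p == q)
               for q in set(assign))

def neighbors(assign):
    return [assign[:i] + (q,) + assign[i + 1:]
            for i in range(len(assign)) for q in range(3) if q != assign[i]]

start = tuple(random.randrange(3) for _ in SIL)
print(cost(start), "->", cost(tabu_search(start, neighbors, cost)))
```
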
  • Response-Time Analysis for Mixed Criticality Systems

    Publication Year: 2011, Page(s): 34 - 43
    Cited by: Papers (34)

    Many safety-critical embedded systems are subject to certification requirements. However, only a subset of the functionality of the system may be safety-critical and hence subject to certification; the rest of the functionality is not safety-critical and does not need to be certified, or is certified to a lower level. The resulting mixed-criticality system offers challenges both for static schedulability analysis and for run-time monitoring. This paper considers a novel implementation scheme for fixed-priority uniprocessor scheduling of mixed-criticality systems. The scheme requires that jobs have their execution times monitored (as is usually the case in high-integrity systems). An optimal priority assignment scheme is derived and sufficient response-time analysis is provided. The new scheme formally dominates those previously published. Evaluations illustrate the benefits of the scheme.

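    For orientation, "response-time analysis" refers to the classic fixed-priority recurrence shown below; this is standard textbook material, not the paper's mixed-criticality extension, which is what the paper actually contributes.

```python
# Classic fixed-priority response-time recurrence: the worst-case
# response time of task i is the least fixed point of
#     R_i = C_i + sum_{j in hp(i)} ceil(R_i / T_j) * C_j
from math import ceil

def response_time(i, C, T):
    """Tasks indexed by priority: 0 is highest. C = WCETs, T = periods."""
    R = C[i]
    while True:
        R_next = C[i] + sum(ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R          # converged: worst-case response time
        if R_next > T[i]:
            return None       # implicit deadline (= period) missed
        R = R_next

C, T = [1, 2, 3], [4, 10, 20]
for i in range(3):
    print(f"task {i}: R = {response_time(i, C, T)}")
```
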
  • Anytime Algorithms for GPU Architectures

    Publication Year: 2011, Page(s): 47 - 56
    Cited by: Papers (5)

    Most algorithms are run-to-completion: they provide one answer upon completion and no answer if interrupted before completion. Anytime algorithms, on the other hand, have a utility that increases monotonically with execution time. Our investigation focuses on the development of time-bounded anytime algorithms on Graphics Processing Units (GPUs) that trade off the quality of the output against execution time. Given a time-varying workload, the algorithm continually measures its progress and the remaining contract time to decide its execution pathway and select the system resources required to maximize the quality of the result. To exploit the quality-time tradeoff, the focus is on the construction, instrumentation, on-line measurement, and decision making of algorithms capable of efficiently managing GPU resources. We demonstrate this with a parallel A* routing algorithm on a CUDA-enabled GPU. The algorithm's execution time and resource usage are described in terms of CUDA kernels constructed at design time. At runtime, the algorithm selects a subset of kernels and composes them to maximize the quality achievable in the remaining contract time. We demonstrate feedback control between the GPU and CPU to achieve controllable computation tardiness by throttling request admissions and the processing precision. As a case study, we have implemented AutoMatrix, a GPU-based vehicle traffic simulator for real-time congestion management, which scales up to 16 million vehicles on a US street map. This is an early effort to enable imprecise and approximate real-time computation on parallel architectures for stream-based time-bounded applications such as traffic congestion prediction and route allocation for large transportation networks.

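    The contract-time control loop described above reduces to a small skeleton. This sketch is hypothetical and CPU-only (the paper composes CUDA kernels, which we abstract into a generic refine step): keep refining while the estimated cost of one more step fits in the remaining contract time.

```python
import time

def anytime(refine, step_cost_estimate, contract_s):
    """Refine until one more step would overrun the contract time."""
    deadline = time.monotonic() + contract_s
    result = None
    while time.monotonic() + step_cost_estimate < deadline:
        result = refine(result)        # each call improves the answer
    return result

# Toy refinement: approximate pi by summing Leibniz-series terms in batches.
state = {"k": 0, "acc": 0.0}
def refine(_):
    for _ in range(100_000):
        state["acc"] += (-1) ** state["k"] / (2 * state["k"] + 1)
        state["k"] += 1
    return 4 * state["acc"]

print("pi ~", anytime(refine, step_cost_estimate=0.01, contract_s=0.1))
```
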
  • RGEM: A Responsive GPGPU Execution Model for Runtime Engines

    Publication Year: 2011, Page(s): 57 - 66
    Cited by: Papers (10)

    General-purpose computing on graphics processing units, also known as GPGPU, is a burgeoning technique for enhancing the computation of parallel programs. Applying this technique to real-time applications, however, requires additional support for timeliness of execution. In particular, the non-preemptive nature of GPGPU, associated with copying data to/from the device memory and launching code onto the device, needs to be managed in a timely manner. In this paper, we present a responsive GPGPU execution model (RGEM), a user-space runtime solution that protects the response times of high-priority GPGPU tasks from competing workloads. RGEM splits a memory-copy transaction into multiple chunks so that preemption points appear at chunk boundaries. It also ensures that only the highest-priority GPGPU task launches code onto the device at any given time, to avoid performance interference caused by concurrent launches. A prototype implementation of an RGEM-based CUDA runtime engine is provided to evaluate the real-world impact of RGEM. Our experiments demonstrate that the response times of high-priority GPGPU tasks can be protected under RGEM, whereas their response times increase in an unbounded fashion without RGEM support as the data sizes of competing workloads increase.

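    The chunking idea is easy to visualize with a toy simulation. The classes below are invented stand-ins, not the RGEM implementation or real CUDA bindings: a long copy is divided into chunks, and each chunk boundary is a point where a higher-priority task can slip in.

```python
import heapq

class GpuQueue:
    """Toy priority queue standing in for the runtime's scheduler."""
    def __init__(self):
        self.ready = []                       # (priority, name, work)
    def submit(self, prio, name, work):
        heapq.heappush(self.ready, (prio, name, work))
    def run(self):
        while self.ready:
            prio, name, work = heapq.heappop(self.ready)
            work(self, prio, name)

def copy_task(total_chunks):
    done = [0]
    def work(q, prio, name):
        done[0] += 1                          # "copy" one chunk
        print(f"{name}: chunk {done[0]}/{total_chunks}")
        if done[0] < total_chunks:
            q.submit(prio, name, work)        # chunk boundary = preemption point
    return work

q = GpuQueue()
q.submit(2, "bulk copy (low prio)", copy_task(4))
q.submit(1, "urgent task (high prio)", lambda q, p, n: print(f"{n}: launched"))
q.run()
```
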
  • Sleepy Sloth: Threads as Interrupts as Threads

    Publication Year: 2011, Page(s): 67 - 77
    Cited by: Papers (4)

    Event latency is considered one of the most important properties when selecting an event-driven real-time operating system. This is why, in previous work on the Sloth kernel, we suggested treating threads as ISRs -- executing all application code in an interrupt context -- thereby reducing event latencies by scheduling and dispatching solely in hardware. However, to achieve these benefits, Sloth does not support blocking threads or ISRs, but requires all control flows to have run-to-completion semantics. In this paper, we present Sleepy Sloth, an extension of Sloth that provides a new generalized thread abstraction that overcomes this limitation while still letting the hardware do all scheduling and dispatching. Sleepy Sloth abolishes the (artificial) distinction between threads and ISRs: threads can be dispatched as efficiently as interrupt handlers, and interrupt handlers can be scheduled as flexibly as threads. Our Sleepy Sloth implementation of the automotive OSEK OS standard provides much more flexibility to application developers while maintaining efficient execution of application control flows. Sleepy Sloth runs on commodity off-the-shelf hardware and outperforms a leading commercial OSEK implementation by a factor of 1.3 to 19.

  • Execution Stack Management for Hard Real-Time Computation in a Component-Based OS

    Publication Year: 2011, Page(s): 78 - 89
    Cited by: Papers (1)

    In addition to predictability, reliability and security constraints are increasingly important. Mixed-criticality and open real-time systems execute software of different certification and trust levels. To limit the scope of errant behavior in these systems, a common approach is to raise isolation barriers between software components. However, a thread that executes through multiple components computes on execution stacks spread across those components. As these stacks require backing memory, each component has a finite number of execution stacks. In this paper, we treat these stacks as shared resources and investigate the implementation of traditional resource-sharing protocols in a real component-based system. We implement multi-resource versions of the Priority Inheritance Protocol (PIP) and Priority Ceiling Protocol (PCP) for these shared stacks and find -- surprisingly -- that neither provides better schedulability characteristics than the other for all system parameterizations. Additionally, we identify the relationship between allocating additional stacks to components and system schedulability. Given this, we describe and evaluate algorithms that ensure system schedulability while seeking to minimize the amount of memory consumed for stacks.

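    As a reminder of the mechanics involved, here is a minimal model of priority inheritance on a single shared resource (invented classes; the paper's multi-unit stack pools and its PCP variant are considerably more involved): a thread holding a resource executes at the maximum of its own priority and the priorities of the threads it blocks.

```python
# Minimal priority-inheritance model for a single-unit resource.
class Thread:
    def __init__(self, name, prio):
        self.name, self.base, self.held = name, prio, []
    def priority(self):          # effective (inherited) priority
        inherited = [w.priority() for r in self.held for w in r.waiters]
        return max([self.base] + inherited)

class Resource:
    def __init__(self):
        self.holder, self.waiters = None, []
    def acquire(self, t):
        if self.holder is None:
            self.holder = t
            t.held.append(self)
            return True
        self.waiters.append(t)   # t blocks; holder now inherits t's prio
        return False

stack_pool = Resource()
low, high = Thread("low", prio=1), Thread("high", prio=10)
stack_pool.acquire(low)
stack_pool.acquire(high)                        # high blocks on the pool
print("low runs at priority", low.priority())   # -> 10 (inherited)
```
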
  • Soft Real-Time on Multiprocessors: Are Analysis-Based Schedulers Really Worth It?

    Publication Year: 2011, Page(s): 93 - 103
    Cited by: Papers (4)

    The evolution of multicore platforms has led to much recent work on multiprocessor scheduling techniques for soft real-time workloads. However, end users routinely run such workloads atop general-purpose operating systems with seemingly good results, albeit typically on over-provisioned systems. This raises the question: when, if ever, is the use of an analysis-based scheduler actually warranted? In this paper, this question is addressed via a video-decoding case study in which a scheme based on the global earliest-deadline-first (GEDF) algorithm was compared against Linux's CFS scheduler. In this study, the GEDF-based scheme proved superior under heavy workloads in terms of several timing metrics, including jitter and deadline tardiness. Prior to discussing these results, we explain how existing GEDF-related scheduling theory was applied to provision the studied system and discuss various "mismatches" between theoretical assumptions and practice.

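    The GEDF rule referenced above is simple to state in code (scheduler state and job names invented for illustration): among the ready jobs, the m with the earliest absolute deadlines run.

```python
# The GEDF selection rule in miniature.
import heapq

def gedf_pick(ready_jobs, m):
    """ready_jobs: list of (absolute_deadline, job_name); returns the
    m jobs that should occupy the m processors right now."""
    return heapq.nsmallest(m, ready_jobs)

print(gedf_pick([(10, "decode_frame_3"), (4, "decode_frame_1"),
                 (7, "decode_frame_2")], m=2))
```
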
  • RUN: Optimal Multiprocessor Real-Time Scheduling via Reduction to Uniprocessor

    Publication Year: 2011, Page(s): 104 - 115
    Cited by: Papers (11)

    Optimal multiprocessor real-time schedulers incur significant overhead for preemptions and migrations. We present RUN, an efficient scheduler that reduces the multiprocessor problem to a series of uniprocessor problems. RUN significantly outperforms existing optimal algorithms, with an upper bound of O(log m) average preemptions per job on m processors (at most 3 per job in all of our simulated task sets), and reduces to Partitioned EDF whenever a proper partitioning is found.

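    The reduction can be illustrated at the level of utilizations alone. This is a conceptual sketch only; real RUN builds a server tree offline and schedules it online, which is omitted here. PACK groups tasks into unit-capacity servers, DUAL replaces each server of utilization u with one of utilization 1 - u, and iterating shrinks the problem until it fits on one processor.

```python
# Utilization-level sketch of RUN's PACK/DUAL reduction (conceptual).
def pack(utils):
    bins = []
    for u in sorted(utils, reverse=True):       # first-fit decreasing
        for b in bins:
            if sum(b) + u <= 1.0 + 1e-9:        # unit-capacity server
                b.append(u)
                break
        else:
            bins.append([u])
    return [sum(b) for b in bins]

def run_reduce(utils):
    level = 0
    while len(utils) > 1:                       # until a single server
        packed = pack(utils)
        utils = [1.0 - u for u in packed]       # DUAL operation
        level += 1
        print(f"level {level}: packed {packed} -> duals {utils}")
    return level

run_reduce([0.6, 0.6, 0.6, 0.6, 0.6])  # 5 tasks of utilization 0.6
```
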
  • The Partitioned EDF Scheduling of Sporadic Task Systems

    Publication Year: 2011, Page(s): 116 - 125
    Cited by: Papers (3)

    The partitioned scheduling of sporadic task systems on identical multiprocessors is considered. This problem is known to be intractable (NP-hard in the strong sense). A polynomial-time approximation scheme (PTAS) is proposed for sporadic task systems satisfying the additional constraint that, for each of the three parameters that characterize sporadic tasks -- worst-case execution time, relative deadline, and period -- the ratio of the largest value to the smallest is bounded from above by a constant.

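    For contrast with the PTAS, the standard first-fit heuristic for partitioned EDF fits in a dozen lines. This is a common baseline, not the paper's scheme, and the density test used is sufficient but not exact.

```python
# First-fit decreasing partitioning with a simple EDF density test:
# a task fits on a processor if the total density sum(C / min(D, T))
# there stays at most 1 (a sufficient condition for EDF).
def first_fit(tasks, m):
    """tasks: list of (C, D, T); returns processor assignments in
    decreasing-density order, or None if the heuristic fails."""
    load = [0.0] * m
    assignment = []
    for C, D, T in sorted(tasks, key=lambda t: -t[0] / min(t[1], t[2])):
        dens = C / min(D, T)
        for p in range(m):
            if load[p] + dens <= 1.0:
                load[p] += dens
                assignment.append(p)
                break
        else:
            return None
    return assignment

print(first_fit([(1, 4, 4), (2, 5, 10), (3, 8, 8), (2, 6, 6)], m=2))
```
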
  • Workload-Aware Partitioning for Maintaining Temporal Consistency upon Multiprocessor Platforms

    Publication Year: 2011, Page(s): 126 - 135
    Cited by: Papers (3)

    Deriving deadlines and periods of update transactions for maintaining timeliness and data freshness has long been recognized as an important problem in real-time database research. Despite years of active research, the state of the art focuses only on uniprocessor systems. In this paper, we take a first step toward studying the workload-aware temporal consistency maintenance problem upon multiprocessor platforms. We consider the problem of how to partition a set of update transactions onto m ≥ 2 processors to maintain the temporal consistency of real-time data objects under earliest-deadline-first (EDF) scheduling, while minimizing the total workload on the m processors. First, we consider only the feasibility aspect of the problem by proposing a polynomial-time partitioning scheme, Temporal Consistency Partitioning (TCP), and formally showing that the resource augmentation bound of TCP is (3 - 1/m). Second, we address the partitioning problem globally by proposing a polynomial-time heuristic, Density factor Balancing Fit (DBF), in which density-factor balancing plays a major role in producing workload-efficient partitionings. Finally, we experimentally evaluate the feasibility and workload performance of DBF against other heuristics of comparable quality.

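    A toy version of density-based placement conveys the intuition; the details below are invented, and the paper's DBF heuristic and transaction model are more involved. With validity interval V, a transaction's deadline and period must sum to at most V, so setting both to about V/2 gives a density of roughly 2C/V, and transactions are placed so that accumulated density stays balanced.

```python
# Toy density-balancing partitioner for update transactions (invented).
def balance_fit(transactions, m):
    """transactions: list of (C, V); returns (placements, loads)."""
    load = [0.0] * m
    placement = []
    for C, V in sorted(transactions, key=lambda t: -2 * t[0] / t[1]):
        p = min(range(m), key=lambda q: load[q])   # least-loaded processor
        if load[p] + 2 * C / V > 1.0:
            raise ValueError("infeasible under this naive density bound")
        load[p] += 2 * C / V                       # density ~ 2C/V
        placement.append(p)
    return placement, load

print(balance_fit([(1, 10), (2, 12), (1, 8), (3, 20)], m=2))
```
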
  • On Energy-Efficient Trap Coverage in Wireless Sensor Networks

    Publication Year: 2011, Page(s): 139 - 148
    Cited by: Papers (4)

    In wireless sensor networks (WSNs), trap coverage has recently been proposed as a tradeoff between the availability of sensor nodes and sensing performance. It offers an efficient framework for tackling the challenge of limited resources in large-scale sensor networks. However, existing work has only studied the theoretical foundation of how to decide the deployment density of sensors that ensures the desired degree of trap coverage; practical issues, such as how to efficiently schedule sensor nodes to guarantee trap coverage under an arbitrary deployment, remain untouched. In this paper, we formally formulate the Minimum Weight Trap Cover Problem and prove that it is NP-hard. To solve the problem, we introduce a bounded approximation algorithm, called Trap Cover Optimization (TCO), that schedules the activation of sensors while satisfying the specified trap coverage requirement. The weight of the trap cover we find is proved to be at most O(ρ) times that of the optimal solution, where ρ is the density of sensor nodes in the region. To evaluate our design, we perform extensive simulations to demonstrate the effectiveness of our proposed algorithm and show that it achieves at least 14% better energy efficiency than the state-of-the-art solution.

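    The weighted-cover flavor of the problem can be illustrated with plain greedy weighted set cover. This is not the paper's TCO algorithm, and the sensors below are invented: repeatedly activate the sensor with the lowest weight per newly covered point.

```python
# Greedy weighted set cover as an illustration of the cover structure.
def greedy_cover(sensors, universe):
    """sensors: {name: (weight, set_of_covered_points)}."""
    uncovered, active, total = set(universe), [], 0.0
    while uncovered:
        # Pick the sensor minimizing weight / (# newly covered points).
        name, (w, pts) = min(
            ((n, s) for n, s in sensors.items()
             if n not in active and s[1] & uncovered),
            key=lambda item: item[1][0] / len(item[1][1] & uncovered))
        active.append(name)
        total += w
        uncovered -= pts
    return active, total

sensors = {"s1": (2.0, {1, 2, 3}), "s2": (1.0, {3, 4}), "s3": (1.5, {1, 4, 5})}
print(greedy_cover(sensors, {1, 2, 3, 4, 5}))
```
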
  • WizSync: Exploiting Wi-Fi Infrastructure for Clock Synchronization in Wireless Sensor Networks

    Publication Year: 2011, Page(s): 149 - 158
    Cited by: Papers (3)

    Time synchronization is a fundamental service for Wireless Sensor Networks (WSNs). This paper proposes a novel WSN time synchronization approach that exploits existing Wi-Fi infrastructure. Our approach leverages the fact that ZigBee sensors and Wi-Fi nodes often occupy the same or overlapping radio frequency bands in the 2.4 GHz unlicensed spectrum. As a result, a ZigBee node can detect and synchronize to the periodic beacons broadcast by Wi-Fi access points (APs). We experimentally characterize the spatial and temporal characteristics of Wi-Fi beacons in an enterprise Wi-Fi network consisting of over 50 APs deployed in a 300,000 square foot office building. Motivated by our measurement results, we design a novel synchronization protocol called WizSync. WizSync employs advanced Digital Signal Processing (DSP) techniques to detect periodic Wi-Fi beacons and uses them to calibrate the frequency of native clocks. WizSync intelligently predicts the clock skew and adaptively schedules nodes to sleep to conserve energy. We implement WizSync in TinyOS 2.1.x and conduct extensive evaluation on a testbed consisting of 19 TelosB motes. Our results show that WizSync can achieve an average synchronization error of 0.12 milliseconds over a period of 10 days with a radio power consumption of 50.9 microwatts per node.

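    Calibrating a native clock against periodic beacons boils down to estimating the local clock's rate error. The sketch below uses a generic least-squares fit rather than WizSync's DSP pipeline; the 102.4 ms beacon interval is the common Wi-Fi default.

```python
# Clock-skew estimation from periodic beacon arrivals (generic fit).
def estimate_skew(local_ts, beacon_interval):
    """local_ts[i] = local clock reading at the i-th detected beacon."""
    n = len(local_ts)
    xs = [i * beacon_interval for i in range(n)]   # ideal arrival times
    mx, my = sum(xs) / n, sum(local_ts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, local_ts))
             / sum((x - mx) ** 2 for x in xs))
    return slope - 1.0   # 0 means perfect rate; 2e-5 means 20 ppm fast

# A clock running 20 ppm fast, observed over ten 102.4 ms beacon periods:
readings = [i * 0.1024 * (1 + 20e-6) for i in range(10)]
print(f"estimated skew: {estimate_skew(readings, 0.1024) * 1e6:.1f} ppm")
```
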
  • Improving Link Quality by Exploiting Channel Diversity in Wireless Sensor Networks

    Publication Year: 2011, Page(s): 159 - 169
    Cited by: Papers (1)

    A large percentage of links in low-power wireless sensor networks are of intermediate quality. To the best of our knowledge, opportunistic exploitation is currently the only way to use these links. However, such exploitation requires overhearing, which consumes a significant amount of energy. In this paper, we propose a new approach that exploits intermediate-quality (IQ) links through channel diversity with a new protocol, the IQ Link Transformation Protocol (ILTP), which does not require overhearing. ILTP transforms IQ links into good links, allowing us to exploit such links continuously rather than only opportunistically. Our key insight is that the packet reception ratios (PRRs) across different channels on IQ links are not correlated, and it is common on such links to find channels whose quality changes on the time scale of a few minutes. Consequently, when the link quality of a channel is bad, it is highly likely that a good channel can be found and that its quality will remain good for at least a few minutes. Our evaluations on three large-scale testbeds demonstrate that ILTP consistently transforms IQ links into good links. We observe that even a poor link with a PRR of 0.05 can be transformed into a good link with a PRR greater than 0.9. When ILTP is integrated with CTP, the default collection tree protocol for TinyOS, the average number of transmissions per end-to-end packet delivery is reduced by 24% to 58%.

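    The transformation idea can be sketched as a PRR-driven channel hopper. The thresholds, window size, and probe function below are invented; ILTP's actual protocol machinery differs.

```python
from collections import deque
import random

class ChannelHopper:
    GOOD_PRR, WINDOW = 0.9, 50        # invented parameters

    def __init__(self, channels, probe):
        self.channels = channels
        self.probe = probe            # probe(ch) -> estimated PRR
        self.current = channels[0]
        self.history = deque(maxlen=self.WINDOW)

    def on_packet(self, received):
        self.history.append(received)
        if len(self.history) == self.WINDOW:
            prr = sum(self.history) / self.WINDOW
            if prr < self.GOOD_PRR:
                # PRRs across channels are largely uncorrelated on IQ
                # links, so some other channel is likely good right now.
                best = max(self.channels, key=self.probe)
                if best != self.current:
                    self.current = best
                    self.history.clear()

# 802.15.4 channels 11-26 in the 2.4 GHz band; random stand-in probe.
hopper = ChannelHopper(list(range(11, 27)), probe=lambda ch: random.random())
for _ in range(200):
    hopper.on_packet(random.random() < 0.5)   # a lossy link
print("now on channel", hopper.current)
```
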
  • WiCop: Engineering WiFi Temporal White-Spaces for Safe Operations of Wireless Body Area Networks in Medical Applications

    Publication Year: 2011, Page(s): 170 - 179
    Cited by: Papers (3)

    ZigBee and other wireless technologies operating in the 2.4 GHz ISM band are being applied in Wireless Body Area Networks (WBANs) for many medical applications. However, these low duty cycle, low power, and low data rate medical WBANs suffer from WiFi co-channel interference. WiFi interference can lead to longer latency and higher packet losses in WBANs, which can be particularly harmful to safety-critical applications with stringent temporal requirements. Existing solutions to WiFi-WBAN coexistence either require modifications to WiFi or WBAN devices, or have limited applicability. In this paper, by exploiting the Clear Channel Assessment (CCA) mechanisms in WiFi devices, we propose a novel policing framework, WiCop, that can effectively control the temporal white-spaces between WiFi transmissions. Specifically, the WiCop Fake-PHY-Header policing strategy broadcasts a fake WiFi PHY preamble-header to mute other WiFi interferers for the duration of a WBAN active interval, while the WiCop DSSS-Nulling policing strategy uses a repeated WiFi PHY preamble (with its spectrum side lobe nulled by a band-pass filter) to mute other WiFi interferers throughout the WBAN active interval. The resulting WiFi temporal white-spaces can be used to deliver low duty cycle WBAN traffic. We have implemented and validated WiCop on SORA, a software-defined radio platform. Experiments show that, with the assistance of the proposed WiCop policing schemes, the packet reception rate of a ZigBee-based WBAN can increase by up to 43.8% in the presence of a busy WiFi interferer.
