30th IEEE Real-Time Systems Symposium (RTSS 2009)

Date: 1-4 Dec. 2009

  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

  • Message from the Chairs

    Page(s): ix
  • Organizing Committee

    Page(s): x
  • Technical Program Committee

    Page(s): xi - xii
  • List of Reviewers

    Page(s): xiii - xiv
  • On the Benefits of Relaxing the Periodicity Assumption for Networked Control Systems over CAN

    Page(s): 3 - 12

    The vast majority of control systems rely on networks for communication between the different agents: sensors, controllers, and actuators. The existing paradigm regards the messages between sensors and controllers, and between controllers and actuators, as periodic. Although this strategy simplifies analysis and implementation, it leads to conservative use of the communication bandwidth. Building on previous work by the authors, this paper proposes an aperiodic strategy for dynamically allocating bandwidth according to the current state of the plants and the available resources. The case of control loops closed over Controller Area Networks (CAN) is discussed in detail and illustrated on a train car.

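    A rough way to picture the state-dependent, aperiodic triggering idea above is an event-triggered transmission rule: a node sends a message only when the plant state has drifted sufficiently from the last transmitted value. The threshold rule and names below are illustrative assumptions, not the authors' specific bandwidth-allocation scheme.

```python
# Illustrative sketch of state-dependent (aperiodic) message triggering:
# a sensor transmits over the bus only when the plant state has drifted
# far enough from the last transmitted value. The threshold and the plant
# trajectory are made-up placeholders, not the paper's actual policy.

def should_transmit(current_state: float, last_sent_state: float,
                    error_threshold: float) -> bool:
    """Send a new sample only if the state error exceeds the threshold."""
    return abs(current_state - last_sent_state) > error_threshold

last_sent = 0.0
for state in [0.01, 0.03, 0.20, 0.21, 0.55]:   # hypothetical plant trajectory
    if should_transmit(state, last_sent, error_threshold=0.1):
        last_sent = state
        print(f"transmit state={state:.2f}")    # message placed on the CAN bus
```
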
  • Scheduling of Battery Charge, Discharge, and Rest

    Page(s): 13 - 22

    Electric vehicles operate inefficiently with a naive battery management system that charges or discharges battery cells in a pack based solely on application load demands. The battery pack's operation-time and lifetime can be extended significantly by effectively scheduling (the cyber part) battery charge, discharge, and rest activities, based on the battery characteristics (the physical part). We propose a set of policies for scheduling battery-cell activities, called the weighted-k round-robin (kRR) scheduling framework. This framework dynamically adapts battery-cell activities to load demands and the condition of individual cells, thereby extending the battery pack's operation-time and making it robust to anomalous voltage imbalances. The framework comprises two key components. First, an adaptive filter estimates the upcoming load demand. Then, based on the estimated load demand, the kRR scheduler determines the number of parallel-connected cells to be discharged simultaneously. The scheduler also effectively partitions the cells in the pack, allowing cells to be charged and discharged simultaneously in coordination with the battery reconfiguration system we developed earlier [17]. Besides the kRR scheduling framework, we characterize the discharge and recovery efficiency of a Lithium-ion battery cell. The kRR scheduling framework is shown to outperform three alternative scheduling mechanisms in operation-time by 7-56% and to improve tolerance of voltage imbalance by up to 50%.

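    The two components named in the abstract (a load-demand filter feeding a cell-selection step) can be sketched roughly as below; the exponential-smoothing filter, the highest-state-of-charge selection rule, and all numbers are illustrative assumptions, not the paper's exact weighted-kRR policy.

```python
# Minimal sketch of a weighted round-robin cell-selection step: each cycle,
# discharge the k healthiest cells (highest state of charge) and rest the
# others. The selection rule and data layout are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: int
    soc: float          # state of charge, 0.0-1.0
    voltage: float      # terminal voltage in volts

def estimate_load_demand(history: list[float], alpha: float = 0.5) -> float:
    """Toy adaptive filter: exponentially weighted average of past demand."""
    demand = history[0]
    for sample in history[1:]:
        demand = alpha * sample + (1 - alpha) * demand
    return demand

def select_cells_to_discharge(cells: list[Cell], demand_amps: float,
                              amps_per_cell: float) -> list[Cell]:
    """Pick the k cells with the highest SoC to serve the estimated demand."""
    k = min(len(cells), max(1, round(demand_amps / amps_per_cell)))
    return sorted(cells, key=lambda c: c.soc, reverse=True)[:k]

pack = [Cell(0, 0.90, 3.9), Cell(1, 0.75, 3.7), Cell(2, 0.60, 3.6), Cell(3, 0.85, 3.8)]
demand = estimate_load_demand([4.0, 5.0, 6.0])
active = select_cells_to_discharge(pack, demand, amps_per_cell=2.0)
print("discharging cells:", [c.cell_id for c in active])
```
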
  • Adaptive Dynamic Power Management for Hard Real-Time Systems

    Page(s): 23 - 32

    Power dissipation has constrained the performance scaling of modern computer systems over the past decade. Dynamic power management has been widely applied to change the system (or device) state dynamically and thereby reduce power consumption. This paper explores how to effectively reduce energy consumption while handling event streams with hard real-time guarantees. We adopt Real-Time Calculus to describe event arrivals and resource service by arrival curves and service curves in the interval domain, respectively. We develop online algorithms that adaptively control the power mode of the device, postponing the processing of arriving events as late as possible. Benefiting from the worst-case, interval-based abstraction, our algorithms can, on one hand, handle arbitrary event arrivals (even bursty ones) and, on the other hand, guarantee hard real-time requirements in terms of both timing and backlog constraints. We also present simulation results that demonstrate the effectiveness of our algorithms.

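    As a rough illustration of the "process as late as possible" idea, the sketch below keeps a device asleep until the oldest pending event could no longer meet its deadline. The constant service rate and wake-up latency are illustrative stand-ins, not the paper's arrival-curve / service-curve formulation.

```python
# Toy "as late as possible" wake-up rule for dynamic power management:
# stay in the sleep state until the backlog could no longer be served
# before the oldest pending event's deadline. Constant service rate and
# wake-up latency are illustrative simplifications.

def must_wake_now(now: float, oldest_arrival: float, relative_deadline: float,
                  backlog_events: int, service_rate: float,
                  wakeup_latency: float) -> bool:
    deadline = oldest_arrival + relative_deadline
    time_to_drain = backlog_events / service_rate      # time to serve the backlog
    latest_start = deadline - time_to_drain            # latest safe processing start
    return now + wakeup_latency >= latest_start

# Example: 3 pending events, 100 events/s service, 5 ms wake-up latency.
print(must_wake_now(now=0.010, oldest_arrival=0.0, relative_deadline=0.050,
                    backlog_events=3, service_rate=100.0, wakeup_latency=0.005))
```
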
  • Rapid Early-Phase Virtual Integration

    Page(s): 33 - 44

    In complex hard real-time systems with tight constraints on system resources, small changes in one component of a system can cause a cascade of adverse effects on other parts of the system. We address the inherent complexity of making architectural decisions by raising the level of abstraction at which the analysis is performed. Our analysis approach gives the system architect a rigorous method for quickly determining which system architectures should be pursued, and it allows the architect to track and manage the cascading effects of subsystem/component changes in a comprehensive, quantitative manner. The end product is a virtual architecture analysis that systematically incorporates the inherent coupling among interacting system components that share limited system resources.

  • Unified Cache Modeling for WCET Analysis and Layout Optimizations

    Page(s): 47 - 56

    The presence of instruction and data caches in processors creates a lack of predictability in execution timing. Hard real-time systems require absolute guarantees about execution time, and hence the timing effects of caches need to be modeled while estimating the worst-case execution time (WCET) of a program. In this work, we consider the modeling of a generic cache architecture that is most common in commercial processors: separate instruction and data caches in the first level and a unified cache in the second level (which houses code as well as data). Our modeling is used to develop a timing analysis method built on top of the Chronos WCET analysis tool. Moreover, we use our unified cache modeling to develop WCET-driven code and data layout optimizations, where the code and data layouts are optimized simultaneously to reduce the WCET.

  • Timing Analysis of Concurrent Programs Running on Shared Cache Multi-Cores

    Page(s): 57 - 67

    Memory accesses form an important source of timing unpredictability. Timing analysis of real-time embedded software thus requires bounding the time for memory accesses. Multiprocessing, a popular approach for performance enhancement, opens up the opportunity for concurrent execution. However, due to contention for shared memory among different processing cores, memory access behavior becomes more unpredictable, and hence harder to analyze. In this paper, we develop a timing analysis method for concurrent software running on multi-cores with a shared instruction cache. Communication across tasks is by message passing, where the message mailboxes are accessed via interrupt service routines. We do not handle data caches, shared-memory synchronization, or code sharing across tasks. Our method progressively improves the lifetime estimates of tasks that execute concurrently on multiple cores, in order to estimate potential conflicts in the shared cache. Possible conflicts arising from overlapping task lifetimes are accounted for in the hit-miss classification of accesses to the shared cache, to provide safe execution time bounds. We show that our method produces lower worst-case response time (WCRT) estimates than existing shared-cache analysis on a real-world embedded application.

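    The interplay between task-lifetime estimates and shared-cache hit/miss classification can be caricatured as below; the modulo set mapping and the "demote a hit to a miss on any conflict" rule are crude simplifications, not the paper's iterative analysis.

```python
# Rough sketch of shared-cache conflict accounting: a block classified as a
# hit by the per-core analysis is demoted to a miss if a task with an
# overlapping lifetime on another core maps some block to the same cache set.
# Lifetimes, the set mapping, and the demotion rule are simplified assumptions.

def overlaps(a: tuple[float, float], b: tuple[float, float]) -> bool:
    return a[0] < b[1] and b[0] < a[1]

def classify_shared_hits(task_lifetimes: dict[str, tuple[float, float]],
                         task_blocks: dict[str, set[int]],
                         own_task: str, own_hits: set[int],
                         num_sets: int) -> set[int]:
    """Return the blocks of own_task that remain hits in the shared cache."""
    conflicting_sets = set()
    for other, lifetime in task_lifetimes.items():
        if other != own_task and overlaps(lifetime, task_lifetimes[own_task]):
            conflicting_sets |= {blk % num_sets for blk in task_blocks[other]}
    return {blk for blk in own_hits if blk % num_sets not in conflicting_sets}

lifetimes = {"t1": (0.0, 10.0), "t2": (5.0, 12.0)}
blocks = {"t1": {0, 4, 8}, "t2": {16, 21}}          # hypothetical block addresses
print(classify_shared_hits(lifetimes, blocks, "t1", own_hits={0, 4, 8}, num_sets=8))
```
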
  • Using Bypass to Tighten WCET Estimates for Multi-Core Processors with Shared Instruction Caches

    Page(s): 68 - 77

    Multi-core chips have been increasingly adopted by the microprocessor industry. For real-time systems to exploit multi-core architectures, both tight and safe estimates of worst-case execution times (WCETs) are required. Estimating WCETs for multi-core platforms is very challenging because of the possible interferences between cores due to shared hardware resources such as shared caches, the memory bus, etc. This paper proposes a compile-time approach that reduces shared instruction cache interference between cores to tighten WCET estimates. Unlike prior analyses, which account for all possible conflicts caused by tasks running on the other cores when estimating the WCET of a task, our approach drastically reduces the amount of inter-core interference. This is done by controlling the contents of the shared instruction cache(s), caching only blocks statically known to be reused. Experimental results demonstrate the practicality of our approach.

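    A crude version of the "cache only statically reused blocks" idea is sketched below; the reuse criterion (a block referenced more than once, e.g. inside a loop) is an illustrative assumption rather than the paper's actual compile-time analysis.

```python
# Crude sketch of compile-time cache-bypass selection: only instruction
# blocks that are statically known to be re-referenced (e.g. loop bodies)
# are allowed into the shared cache; everything else bypasses it.

from collections import Counter

def choose_cached_blocks(static_reference_trace: list[int]) -> set[int]:
    """Mark a block as cacheable only if it is referenced more than once."""
    counts = Counter(static_reference_trace)
    return {block for block, n in counts.items() if n > 1}

# Hypothetical per-task reference trace (block ids); blocks 2 and 3 sit in a loop.
trace = [0, 1, 2, 3, 2, 3, 2, 3, 4]
cacheable = choose_cached_blocks(trace)
bypassed = set(trace) - cacheable
print("cached:", sorted(cacheable), "bypassed:", sorted(bypassed))
```
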
  • Scheduling Hard Real-Time Garbage Collection

    Page(s): 81 - 92

    Managed languages such as Java and C# are increasingly being considered for hard real-time applications because of their productivity and software engineering advantages. Automatic memory management, or garbage collection, is a key enabler for robust, reusable libraries, yet remains a challenge for analysis and implementation of real-time execution environments. This paper comprehensively compares the two leading approaches to hard real-time garbage collection. While many design decisions are involved in selecting a real-time garbage collection algorithm, for time-based garbage collectors researchers and practitioners remain undecided between periodic scheduling and slack-based scheduling. A significant impediment to valid experimental comparison is that the commercial implementations use completely different proprietary infrastructures. Here, we present Minuteman, a framework for experimenting with real-time collection algorithms in the context of a high-performance execution environment for real-time Java. We provide the first comparison of the two approaches, both experimentally using realistic workloads and analytically in terms of schedulability.

  • A Generic Framework for Soft Real-Time Program Executions on NAND Flash Memory in Multi-Tasking Embedded Systems

    Page(s): 93 - 104

    This paper proposes a novel technique called mRT-PLRU (multi-tasking real-time constrained combination of pinning and LRU), a generic framework for using inexpensive nonvolatile NAND flash memory to store and execute real-time programs in multi-tasking environments. In order to execute multiple real-time tasks stored in NAND flash memory with minimal usage of expensive RAM, mRT-PLRU is optimally configured in two steps. In the first step, a per-task analysis finds, for each individual task, the function relating RAM size to execution time. Using these functions for all tasks as inputs, the second step, called stochastic-analysis-in-the-loop optimization, conducts an iterative convex optimization with a stochastic analysis serving as the probabilistic schedulability check. As a result, the optimization loop can allocate RAM to multiple tasks such that their deadlines are probabilistically guaranteed with minimal RAM usage. Moreover, mRT-PLRU is configured in a developer-transparent way, placing no burden on the program developer, which is essential for an embedded-system industry under strong time-to-market pressure. The usefulness of the developed technique is extensively verified through both simulation and an actual implementation. Our experimental study shows that mRT-PLRU can save up to 80% of the RAM required by the industry-common shadowing approach.

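    The combination of pinning and LRU can be pictured with the small page-cache sketch below; the pin set, frame count, and access trace are hypothetical, and the sketch ignores the flash read costs and the schedulability analysis that mRT-PLRU actually optimizes.

```python
# Tiny pinning + LRU page cache: pinned pages stay resident in RAM, the
# remaining RAM frames are managed with plain LRU. Pin set, frame count,
# and access trace are hypothetical placeholders.

from collections import OrderedDict

class PinnedLRUCache:
    def __init__(self, frames: int, pinned: set[int]):
        self.pinned = set(pinned)               # always resident after initial load
        self.capacity = frames - len(self.pinned)
        self.lru = OrderedDict()                # page id -> None, ordered by recency
        self.misses = 0

    def access(self, page: int) -> None:
        if page in self.pinned:
            return                              # pinned pages never fault after load
        if page in self.lru:
            self.lru.move_to_end(page)          # refresh recency
            return
        self.misses += 1                        # page fault: read from NAND flash
        if len(self.lru) >= self.capacity:
            self.lru.popitem(last=False)        # evict least recently used
        self.lru[page] = None

cache = PinnedLRUCache(frames=4, pinned={0, 1})
for p in [0, 2, 3, 2, 4, 3, 1, 2]:
    cache.access(p)
print("page faults:", cache.misses)
```
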
  • Integrating Proactive and Reactive Approaches for Robust Real-Time Data Services

    Page(s): 105 - 114

    Real-time data services are needed in data-intensive real-time applications such as e-commerce or traffic control. However, it is challenging to support real-time data services when workloads change dynamically with market or traffic status. To enhance the quality of real-time data services even in the presence of dynamic workloads, feedback control theory has been applied. However, a major drawback of feedback control is that it only reacts to performance errors. To improve the robustness of real-time data services, we develop a statistical feed-forward approach that proactively adapts the incoming load, if necessary, to support the desired real-time data service delay. Further, we integrate it with a feedback controller to compensate for potential prediction errors and adjust the system behavior reactively for timely data services. Performance evaluation results acquired in our real-time data service testbed show that our integrated approach considerably reduces the average delay and transient delay fluctuations, while improving throughput, compared to the tested baselines including the feed-forward-only and feedback-only approaches.

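    One way to picture the integration of a statistical feed-forward predictor with a feedback controller is sketched below; the exponential-smoothing predictor, the proportional feedback gain, and the capacity model are illustrative choices, not the paper's controller design.

```python
# Sketch of combining feed-forward load prediction with feedback delay control:
# the predictor proactively throttles admitted load when a burst is expected,
# and a proportional feedback term corrects the remaining delay error.
# Smoothing factor, gain, and the service model are illustrative assumptions.

def predict_load(prev_prediction: float, observed_load: float, alpha: float = 0.3) -> float:
    """Exponential smoothing as a stand-in for the statistical predictor."""
    return alpha * observed_load + (1 - alpha) * prev_prediction

def admitted_fraction(predicted_load: float, capacity: float,
                      measured_delay: float, target_delay: float,
                      feedback_gain: float = 0.5) -> float:
    feed_forward = min(1.0, capacity / max(predicted_load, 1e-9))  # proactive throttle
    feedback = feedback_gain * (target_delay - measured_delay)     # reactive correction
    return max(0.0, min(1.0, feed_forward + feedback))

prediction = 100.0
for load, delay in [(120.0, 0.06), (150.0, 0.09), (90.0, 0.04)]:
    prediction = predict_load(prediction, load)
    frac = admitted_fraction(prediction, capacity=100.0,
                             measured_delay=delay, target_delay=0.05)
    print(f"admit {frac:.2f} of incoming transactions")
```
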
  • Online Scheduling Switch for Maintaining Data Freshness in Flexible Real-Time Systems

    Page(s): 115 - 124

    Maintaining the temporal validity of real-time data is one of the crucial issues in a real-time database system. Past studies focus on designing algorithms that minimize the workload imposed by a fixed set of update transactions while maintaining data freshness within validity intervals. In this paper, we revisit this problem by investigating the cost of data freshness maintenance and online scheduling overhead in the presence of mode changes in real-time systems. We propose to apply periodic scheduling policies when the imposed update workload is low, to maintain high data freshness. When the update workload becomes high, we propose to switch to more sophisticated algorithms to improve schedulability. In the latter case, not only must each scheduling policy be able to schedule the task set in the corresponding mode, but temporal validity must also be maintained during the mode changes. To address this problem, two algorithms, named search-based switch (SBS) and adjustment-based switch (ABS), are proposed to search for the proper switch point online. SBS checks temporal validity at the first time slot of each idle period, while ABS further relaxes this restriction through schedule adjustment. Our experimental results demonstrate the correctness and efficiency of these two algorithms. Our results also show that switching schedulers according to the runtime processor workload can significantly outperform a single fixed scheduling policy in terms of data freshness, while incurring only limited online switch overhead.

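    The kind of temporal-validity check performed around a switch point can be pictured as below; the data items, times, and the single-point test are hypothetical simplifications, since the real SBS/ABS algorithms search for or adjust the switch point rather than merely testing one candidate.

```python
# Toy check of temporal validity across a scheduling-policy switch: at a
# candidate switch time, each real-time data item stays fresh only if its
# next update under the new schedule lands before its validity interval
# expires. All values are hypothetical.

from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    last_update: float      # time of the most recent update
    validity: float         # validity-interval length
    next_update: float      # first update scheduled by the new policy

def switch_is_safe(items: list[DataItem], switch_time: float) -> bool:
    for item in items:
        expires = item.last_update + item.validity
        if switch_time > expires or item.next_update > expires:
            return False    # the item would become stale around the switch
    return True

items = [DataItem("temp", last_update=4.0, validity=5.0, next_update=8.0),
         DataItem("pressure", last_update=6.0, validity=3.0, next_update=8.5)]
print(switch_is_safe(items, switch_time=7.0))
```
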
  • Spatiotemporal Delay Control for Low-Duty-Cycle Sensor Networks

    Page(s): 127 - 137

    Data delivery is a major function of sensor network applications. Many applications, such as military surveillance, require detected events of interest to be reported to a command center within a specified time frame, and therefore impose a real-time bound on communication delay. On the other hand, one of the most effective ways to conserve energy is to keep sensor nodes in the dormant state as long as possible while satisfying application requirements. Obviously, a node cannot communicate while it is not active. Therefore, to deliver data in a timely manner in such extremely low duty-cycle sensor networks, communication needs to be carefully managed among sensor nodes. In this work, we introduce three approaches to providing real-time guarantees on communication delay. First, we present a method for increasing the duty cycle at individual nodes. Then, we describe a scheme for placing sink nodes. Building on these two methods, we discuss a hybrid approach that achieves a better balance between cost and efficiency in bounding communication delay. Our solution is globally optimal in terms of minimizing the energy consumption for bounding pairwise end-to-end delay. For the many-to-one and many-to-many cases, which are NP-hard, we propose corresponding heuristic algorithms. To our knowledge, these are the most generic and encouraging results to date in this new research direction. We evaluate our design with an extensive simulation of 5,000 nodes as well as with a small-scale running testbed on the TinyOS/Mote platform. Results show the effectiveness of our approach and significant improvements over an existing solution.

  • Cross-Layer Analysis of the End-to-End Delay Distribution in Wireless Sensor Networks

    Page(s): 138 - 147

    Emerging applications of wireless sensor networks (WSNs) require real-time quality of service (QoS) guarantees to be provided by the network. However, designing real-time scheduling and communication solutions for these networks is challenging, since the characteristics of QoS metrics in WSNs are not yet well known. Due to the nature of wireless connectivity, it is infeasible to satisfy worst-case QoS requirements in WSNs. Instead, probabilistic QoS guarantees should be provided, which requires the definition of probabilistic QoS metrics. To provide an analytical tool for the development of real-time solutions, in this paper, the distribution of end-to-end delay in multi-hop WSNs is investigated. Accordingly, a comprehensive and accurate cross-layer analysis framework, which employs a stochastic queueing model in realistic channel environments, is developed. This framework captures the heterogeneity in WSNs in terms of channel quality, transmit power, queue length, and communication protocols. A case study with the TinyOS CSMA/CA MAC protocol is conducted to show how the developed framework can analytically predict the distribution of end-to-end delay. Testbed experiments are provided to validate the developed model. The cross-layer framework can be used to identify the relationships between network parameters and the distribution of end-to-end delay and, accordingly, to design real-time solutions for WSNs. Our ongoing work suggests that this framework can be easily extended to model additional QoS metrics such as the distribution of energy consumption. To the best of our knowledge, this is the first work to investigate probabilistic QoS guarantees in WSNs.

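    A very simplified view of deriving an end-to-end delay distribution from per-hop behavior is to convolve independent per-hop delay distributions and read off the probability of meeting a deadline; the independence assumption and the discrete example distributions are for illustration only, not the paper's cross-layer queueing model.

```python
# Simplified end-to-end delay distribution: convolve (assumed independent)
# per-hop delay distributions, then compute the probability of meeting a
# deadline. The example distributions are hypothetical; the paper's model
# additionally captures channel, queueing, and MAC effects.

def convolve(d1: dict[int, float], d2: dict[int, float]) -> dict[int, float]:
    """Distribution of the sum of two independent integer-valued delays (ms)."""
    out: dict[int, float] = {}
    for t1, p1 in d1.items():
        for t2, p2 in d2.items():
            out[t1 + t2] = out.get(t1 + t2, 0.0) + p1 * p2
    return out

per_hop = {10: 0.7, 20: 0.2, 50: 0.1}        # hypothetical single-hop delay (ms)
end_to_end = {0: 1.0}
for _ in range(3):                            # a 3-hop path
    end_to_end = convolve(end_to_end, per_hop)

deadline = 60
prob_meet = sum(p for t, p in end_to_end.items() if t <= deadline)
print(f"P(end-to-end delay <= {deadline} ms) = {prob_meet:.3f}")
```
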
  • When In-Network Processing Meets Time: Complexity and Effects of Joint Optimization in Wireless Sensor Networks

    Page(s): 148 - 157

    As sensornets are increasingly being deployed in mission-critical applications, it becomes imperative that we consider application QoS requirements in in-network processing (INP). Towards understanding the complexity of joint QoS and INP optimization, we study the problem of jointly optimizing packet packing (i.e., aggregating shorter packets into longer ones) and the timeliness of data delivery. We identify the conditions under which the problem is strongly NP-hard, and we find that the problem complexity depends heavily on aggregation constraints (in particular, maximum packet size and re-aggregation tolerance) rather than on network and traffic properties. For cases where the problem is NP-hard, we show that there is no polynomial-time approximation scheme (PTAS); for cases where the problem can be solved in polynomial time, we design polynomial-time offline algorithms for finding optimal packet packing schemes. To understand the impact of joint QoS and INP optimization on sensornet performance, we design a distributed, online protocol, tPack, that schedules packet transmissions to maximize the local utility of packet packing at each node. Using a testbed of 130 TelosB motes, we experimentally evaluate the properties of tPack. We find that jointly optimizing data delivery timeliness and packet packing significantly improves network performance. Our findings shed light on the challenges, benefits, and solutions of joint QoS and INP optimization, and they also suggest open problems for future research.

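    The local decision a node faces when packing packets can be caricatured as "hold a short packet to aggregate it only while deadline slack and payload room permit"; the slack test and payload limit below are illustrative assumptions, not tPack's actual utility function.

```python
# Caricature of a local packet-packing decision: hold a short packet to
# aggregate it with later ones only if there is enough deadline slack and
# payload room left. The slack test and size limit are illustrative.

from dataclasses import dataclass

@dataclass
class Packet:
    payload_bytes: int
    deadline: float       # absolute delivery deadline (s)

MAX_PAYLOAD = 100         # assumed maximum aggregated payload (bytes)

def hold_for_packing(buffered: list[Packet], now: float,
                     expected_hop_delay: float) -> bool:
    """Hold only if every buffered packet can still meet its deadline and
    the aggregate payload is not yet full."""
    total = sum(p.payload_bytes for p in buffered)
    slack_ok = all(p.deadline - now > 2 * expected_hop_delay for p in buffered)
    return slack_ok and total < MAX_PAYLOAD

queue = [Packet(30, deadline=1.0), Packet(40, deadline=0.8)]
print(hold_for_packing(queue, now=0.5, expected_hop_delay=0.05))
```
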
  • A Formal Architecture Pattern for Real-Time Distributed Systems

    Page(s): 161 - 170

    Pattern solutions for software and architectures have significantly reduced design, verification, and validation times by mapping challenging problems onto a solved generic problem. In this paper, we present an architecture pattern for ensuring synchronous computation semantics using the PALS protocol. We develop a modeling framework in AADL to automatically transform a synchronous design of a real-time distributed system into an asynchronous design satisfying the PALS protocol. We present a detailed example of how the PALS transformation works for a dual-redundant system. From the example, we also describe the general transformation in terms of intuitively defined AADL semantics. Furthermore, we develop a static analysis checker to find necessary conditions that must be satisfied for the PALS transformation to work correctly. The transformations and static checks that we have described are implemented in OSATE using the generated EMF metamodel API for model manipulation.

  • Distributed, Modular HTL

    Page(s): 171 - 180

    The Hierarchical Timing Language (HTL) is a real-time coordination language for distributed control systems. HTL programs must be checked for well-formedness, race freedom, transmission safety (schedulability of inter-host communication), and time safety (schedulability of host computation). We present a modular abstract syntax and semantics for HTL; modular checks of well-formedness, race freedom, and transmission safety; and modular code distribution. Our contributions here complement previous results on HTL time safety and modular code generation. Modularity in HTL supports easy program composition, fast program analysis and code generation, and so-called runtime patching, in which program components may be modified at runtime.
