Proceedings of the 19th IEEE Real-Time Systems Symposium, 1998

Date: 4-4 Dec. 1998

Displaying Results 1 - 25 of 50
  • Proceedings 19th IEEE Real-Time Systems Symposium (Cat. No.98CB36279)

    Publication Year: 1998
  • Author index

    Publication Year: 1998 , Page(s): 492 - 493
  • Synthesis techniques for low-power hard real-time systems on variable voltage processors

    Publication Year: 1998 , Page(s): 178 - 187
    Cited by:  Papers (53)  |  Patents (9)

    The energy efficiency of systems-on-a-chip can be much improved by varying the supply voltage dynamically at run time. We describe the synthesis of systems-on-a-chip based on core processors, treating voltage (and correspondingly the clock frequency) as a variable to be scheduled along with the computation tasks during the static scheduling step. In addition to describing the complete synthesis design flow for these variable voltage systems, we focus on the problem of voltage scheduling while taking into account the inherent limitation on the rates at which the voltage and clock frequency can be changed by the power supply controllers and clock generators. Taking these limits on the rate of change into account is crucial, since changing the voltage by even a volt may take time equivalent to hundreds to tens of thousands of instructions on modern processors. We present both an exact but impractical formulation of this scheduling problem as a set of nonlinear equations, and a heuristic approach based on reduction to an optimally solvable restricted ordered scheduling problem. Using various task mixes drawn from a set of nine real-life applications, our results show that we are able to reduce power consumption to within 7% of the lower bound obtained by imposing no limit on the rate of change of voltage and clock frequencies.
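The abstract's point about bounded voltage-slew rates can be illustrated with a small sketch. This is not the authors' algorithm: the regulator slew rate, the schedule format, and the E ∝ V² energy model are all simplifying assumptions for illustration.

```python
# Illustrative sketch (not the paper's synthesis flow): check that a candidate
# voltage schedule respects a bounded rate of voltage change, and integrate an
# assumed E ~ V^2 energy cost over the schedule. All numbers are hypothetical.

MAX_DVDT = 0.1  # volts per millisecond the regulator can slew (assumed)

def transitions_feasible(schedule):
    """schedule: list of (voltage_V, start_ms); each voltage step must be
    reachable within the slack between consecutive segment start times."""
    for (v0, t0), (v1, t1) in zip(schedule, schedule[1:]):
        if abs(v1 - v0) > MAX_DVDT * (t1 - t0):
            return False
    return True

def energy(schedule, end_ms):
    """Energy proportional to V^2 integrated over each voltage segment."""
    total = 0.0
    times = [t for _, t in schedule] + [end_ms]
    for (v, _), t0, t1 in zip(schedule, times, times[1:]):
        total += v * v * (t1 - t0)
    return total

sched = [(3.3, 0.0), (1.8, 20.0), (2.5, 40.0)]
print(transitions_feasible(sched))  # True: a 1.5 V swing in 20 ms fits 0.1 V/ms
print(energy(sched, 60.0))
```

A scheduler that ignores the slew limit would treat every transition as instantaneous; the feasibility check above is exactly the constraint the paper argues cannot be ignored.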

  • Dependable system upgrade

    Publication Year: 1998 , Page(s): 440 - 448
    Cited by:  Papers (14)  |  Patents (2)

    The rate of innovation in technologies has far exceeded the rate of adopting them for at least the past 20 years. To fully realize the potential of innovations, a paradigm shift is needed: from a focus on enabling technologies for completely new installations to one designed to mitigate the risk and cost of bringing new technologies into functioning systems. In this paper, we show that real-time control software can be dependably upgraded online via the use of analytically redundant controllers.

  • A general model for recurring real-time tasks

    Publication Year: 1998 , Page(s): 114 - 122
    Cited by:  Papers (12)  |  Patents (1)

    A new model for hard real-time tasks, the recurring real-time task model, is introduced. This model generalizes earlier models such as the sporadic task model and the generalized multiframe task model. An algorithm is presented for feasibility analysis of a system of independent recurring real-time tasks in a preemptive uniprocessor environment.

  • Real-time scheduling in a generic fault-tolerant architecture

    Publication Year: 1998 , Page(s): 390 - 398
    Cited by:  Papers (4)  |  Patents (3)

    Previous ultra-dependable real-time computing architectures have been specialised to meet the requirements of a particular application domain. Over the last two years, a consortium of European companies and academic institutions has been investigating the design and development of a Generic Upgradable Architecture for Real-time Dependable Systems (GUARDS). The architecture aims to be tolerant of permanent and temporary, internal and external, physical faults and should provide confinement or tolerance of software design faults. GUARDS critical applications are intended to be replicated across the channels which provide the primary hardware fault containment regions. In this paper, we present our approach to real-time scheduling of the GUARDS architecture. We use an extended response-time analysis to predict the timing properties of replicated real-time transactions. Consideration is also given to the scheduling of the inter-channel communications network.

  • A development framework for ultra-dependable automotive systems based on a time-triggered architecture

    Publication Year: 1998 , Page(s): 358 - 367
    Cited by:  Papers (2)  |  Patents (1)

    Today, by-wire systems are well known and utilised in aircraft construction. In the last few years there has been an endeavour in the automotive industry to realise by-wire applications without mechanical or hydraulic backup systems in vehicles. The required electronic systems must be highly reliable and cost-effective due to the constraints of mass production. A time-triggered architecture is a new approach that satisfies these requirements. The backbone of communication in this architecture is the fault-tolerant Time-Triggered Protocol (TTP), developed by the Vienna University of Technology and Daimler-Benz Research. TTP has been designed according to the SAE class C classification for safety-critical control applications, such as brake-by-wire or steer-by-wire. For time-triggered architectures, a new development process is required to handle the complexity of the systems, accelerate development and increase reliability. In this paper we present an approach for the development of distributed fault-tolerant systems based on TTP. The approach is evaluated by a brake-by-wire case study.

  • Statistical rate monotonic scheduling

    Publication Year: 1998 , Page(s): 123 - 132
    Cited by:  Papers (34)  |  Patents (3)

    Statistical rate monotonic scheduling (SRMS) is a generalization of the classical RMS results of C. Liu and J. Layland (1973) to periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other.

  • Deadline-modification-SCAN with maximum-scannable-groups for multimedia real-time disk scheduling

    Publication Year: 1998 , Page(s): 40 - 49
    Cited by:  Papers (6)

    Real-time disk scheduling is important for multimedia systems supporting digital audio and video. In recent years, various approaches have been presented that use seek-optimizing schemes to improve the disk throughput of a real-time guaranteed schedule. However, as these conventional approaches apply SCAN only to requests with the same deadline or within the same constant-sized group, their improvements are limited. In this paper, we introduce the DM-SCAN (deadline-modification-SCAN) algorithm with the idea of MSG (maximum-scannable-group). The proposed DM-SCAN method can apply SCAN to MSGs iteratively by modifying request deadlines. We have implemented the DM-SCAN algorithm on UnixWare 2.01. Experiments show that DM-SCAN is significantly better than the well-known SCAN-EDF method in both disk throughput and the number of supported disk requests.
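For context, the conventional scheme the paper improves on (SCAN applied only within equal-deadline groups, in the style of SCAN-EDF) can be sketched roughly as follows; the `(track, deadline)` request format is a hypothetical simplification.

```python
# Sketch of the baseline scheme: order disk requests by deadline (EDF), then
# seek-optimize (SCAN) only within each group sharing the same deadline.
from itertools import groupby

def scan_edf(requests):
    """requests: list of (track, deadline). Returns the service order."""
    by_deadline = sorted(requests, key=lambda r: r[1])
    order = []
    for _, group in groupby(by_deadline, key=lambda r: r[1]):
        # within one deadline, sweep tracks in one direction (SCAN)
        order.extend(sorted(group, key=lambda r: r[0]))
    return order

reqs = [(90, 2), (10, 1), (50, 1), (30, 2)]
print(scan_edf(reqs))  # [(10, 1), (50, 1), (30, 2), (90, 2)]
```

DM-SCAN's contribution is to go beyond these fixed equal-deadline groups: by modifying request deadlines it enlarges the group that can be seek-optimized in one sweep.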

  • General data streaming

    Publication Year: 1998 , Page(s): 232 - 241
    Cited by:  Papers (2)

    This work presents a new I/O system design and implementation targeted at applications that perform data streaming. The approach yields true zero-copy transfers between I/O devices in many instances. We give a general characterization of I/O elements and provide a framework that allows analysis of the potential for zero-copy transfers. Finally, we describe the design, implementation, and performance of a prototype I/O system in a real-time, embeddable, 32-bit operating system whose design is based on the presented analysis to minimize data copying.

  • Practical solutions for QoS-based resource allocation problems

    Publication Year: 1998 , Page(s): 296 - 306
    Cited by:  Papers (54)  |  Patents (3)

    The QoS-based Resource Allocation Model (Q-RAM) proposed by R. Rajkumar et al. (1998) presented an analytical approach for satisfying multiple quality-of-service dimensions in a resource-constrained environment. Using this model, available system resources can be apportioned across multiple applications such that the net utility that accrues to the end users of those applications is maximized. We present several practical solutions to allocation problems that were beyond the limited scope of Q-RAM. First, we show that the Q-RAM problem of finding the optimal resource allocation to satisfy multiple QoS dimensions is NP-hard, and we present a polynomial-time solution for this resource allocation problem which yields a solution within a provably fixed and short distance from the optimal allocation. Secondly, Q-RAM dealt mainly with the problem of apportioning a single resource to satisfy multiple QoS dimensions; we study the converse problem of apportioning multiple resources to satisfy a single QoS dimension. In practice, this problem becomes complicated, since a single QoS dimension perceived by the user can be satisfied using different combinations of available resources. We show that this problem can be formulated as a mixed integer programming problem that can be solved efficiently to yield an optimal resource allocation. We also present the run times of these optimizations to illustrate how these solutions can be applied in practice. A good understanding of these solutions yields insights into the general problem of apportioning multiple resources to simultaneously satisfy multiple QoS dimensions of multiple concurrent applications.

  • Compiler optimizations for real time execution of loops on limited memory embedded systems

    Publication Year: 1998 , Page(s): 154 - 164
    Cited by:  Papers (2)

    We propose a framework to carry out efficient data partitioning for global arrays on embedded systems with limited on-chip memory. The key problem addressed in this work is how to perform a good partitioning of the data references encountered in loops between on-chip and off-chip memory, so as to meet the demands of real-time response by keeping the run-time overhead of remote accesses to a minimum. We introduce the concept of a footprint to precisely calculate the memory demands of references at compile time, and compute a profit value for a reference using its access frequency and reuse factor. We then develop a methodology based on the 0/1 knapsack algorithm to partition the references between local and remote memory. We show the performance improvements due to our approach and compare the results.
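The 0/1-knapsack step described above can be sketched directly: choose which references to place on-chip so total profit is maximized without exceeding the on-chip capacity. The reference names, footprints, and profit values below are made-up illustrations, not from the paper.

```python
# Hedged sketch of reference partitioning as 0/1 knapsack: maximize the summed
# profit (access frequency x reuse) of references kept on-chip, subject to the
# on-chip memory capacity. Classic dynamic-programming formulation.

def partition_references(refs, capacity):
    """refs: list of (name, footprint_bytes, profit).
    Returns (best_profit, set of names placed on-chip)."""
    n = len(refs)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (_, size, profit) in enumerate(refs, 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if size <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - size] + profit)
    # backtrack to recover which references were chosen for on-chip memory
    chosen, c = set(), capacity
    for i in range(n, 0, -1):
        name, size, _ = refs[i - 1]
        if dp[i][c] != dp[i - 1][c]:
            chosen.add(name)
            c -= size
    return dp[n][capacity], chosen

refs = [("A", 4, 40), ("B", 3, 50), ("C", 5, 60)]
print(partition_references(refs, 8))  # (110, {'B', 'C'})
```

Everything not chosen stays in off-chip memory and pays the remote-access overhead; the profit values are what fold access frequency and reuse into a single number per reference.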

  • Combining abstract interpretation and ILP for microarchitecture modelling and program path analysis

    Publication Year: 1998 , Page(s): 144 - 153
    Cited by:  Papers (11)

    Abstract interpretation (AI) and integer linear programming (ILP) are two techniques that have been used independently of each other for worst-case execution time (WCET) approximation. With AI one can compute interesting properties of programs; it can be implemented efficiently and yields provably correct results. Previous work has shown that it is suitable for cache-behaviour prediction of the memory references of a program. With ILP, the structure of a program and its paths can be described easily and in a very natural way: a set of constraints describes the overall structure of the program, and solving the constraints yields very precise results. However, when modelling microarchitectural components like caches or pipelines, the complexity of the solving process can increase dramatically. Our approach uses AI to model the microarchitecture's behaviour and ILP to find worst-case program paths using the results of the AI. This combines the advantages of both approaches.

  • From POSIX Threads to Ada to Java: A Brief History of Runtime Development for Some Real-Time Programming Languages

    Publication Year: 1998 , Page(s): 319

    First page of the article.

  • Maintaining temporal coherency of virtual data warehouses

    Publication Year: 1998 , Page(s): 60 - 70
    Cited by:  Papers (7)  |  Patents (3)

    In electronic commerce applications such as stock trading, there is a need to consult sources available on the web for informed decision making. Because information such as stock prices keeps changing, the web sources must be queried continually to maintain the temporal coherency of the collected data, thereby avoiding decisions based on stale information. However, because network infrastructure has failed to keep pace with ever-growing web traffic, the frequency of contacting web servers must be kept to a minimum. This paper presents adaptive approaches for maintaining the temporal coherency of data gathered from web sources. Specifically, it introduces mechanisms to obtain timely updates from web sources, based on the dynamics of the data and the users' need for temporal accuracy, by judiciously combining push and pull technologies and by using virtual data warehouses to disseminate data within acceptable tolerances to clients. A virtual warehouse maintains temporal coherence, within the specified tolerance, by tracking the amount of change in the web sources, pulling data from the sources at opportune times, and pushing it to clients according to their temporal coherence requirements. The performance of these mechanisms is studied using real stock price traces. One of the attractive features of these mechanisms is that they do not require changes to either the web servers or the HTTP protocol.
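The adaptive-pull idea above can be illustrated with a small sketch: poll a source just often enough that the cached value cannot plausibly drift past the client's tolerance, given the recently observed rate of change. This tracker is an assumption for illustration, not the paper's algorithm.

```python
# Illustrative adaptive puller: estimate the next poll delay from the client's
# tolerance and the observed rate of change of the source value. Hypothetical
# clamping bounds; a real system would also handle bursty sources.

class AdaptivePuller:
    def __init__(self, tolerance):
        self.tolerance = tolerance   # max acceptable drift for the client
        self.last_value = None
        self.last_time = None
        self.rate = 0.0              # observed change per second

    def observe(self, value, t):
        """Record a polled value at time t (seconds) and update the rate."""
        if self.last_value is not None and t > self.last_time:
            self.rate = abs(value - self.last_value) / (t - self.last_time)
        self.last_value, self.last_time = value, t

    def next_poll_delay(self, min_delay=1.0, max_delay=300.0):
        """Seconds until the value could plausibly drift past the tolerance."""
        if self.rate == 0.0:
            return max_delay
        return min(max(self.tolerance / self.rate, min_delay), max_delay)

p = AdaptivePuller(tolerance=0.5)
p.observe(100.0, t=0)
p.observe(101.0, t=10)      # changed 0.1/s, so 0.5 tolerance allows ~5 s
print(p.next_poll_delay())  # 5.0
```

A fast-moving price shortens the delay toward `min_delay`; a quiet source backs off toward `max_delay`, which is exactly the trade-off between coherency and server load described in the abstract.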

  • Symbolic schedulability analysis of real-time systems

    Publication Year: 1998 , Page(s): 409 - 418
    Cited by:  Papers (6)

    We propose a unifying method for analysis of scheduling problems in real-time systems. The method is based on ACSR-VP, a real-time process algebra with value-passing capabilities. We use ACSR-VP to describe an instance of a scheduling problem as a process that has parameters of the problem as free variables. The specification is analyzed by means of a symbolic algorithm. The outcome of the analysis is a set of equations, a solution to which yields the values of the parameters that make the system schedulable. Equations are solved using integer programming or constraint logic programming. The paper presents specifications of two scheduling problems as examples.

  • Isochronous scheduling and its application to traffic control

    Publication Year: 1998 , Page(s): 14 - 25
    Cited by:  Papers (3)  |  Patents (3)

    Existing operating systems and communication protocols cannot achieve high-quality video data transmission on an Ethernet, because they lack QoS assurance mechanisms for the shared medium. We have developed a kernel called Tactix to investigate new QoS assurance technologies that enable distributed continuous media applications. In this paper we focus on fixed bit rate video data transfer over an Ethernet, and we propose isochronous scheduling and its application to software traffic shaping. Furthermore, we present the results of measuring the service quality achieved by these technologies, obtained using ordinary personal computers and a shared-mode 100-Mbps Ethernet. These results indicate that the technologies enable multiple video streams (up to a total bandwidth of about 60 Mbps) and non-real-time background traffic to coexist on an Ethernet with a very low packet loss ratio and a transmission delay of less than a few milliseconds.

  • A better polynomial-time schedulability test for real-time multiframe tasks

    Publication Year: 1998 , Page(s): 104 - 113
    Cited by:  Papers (16)

    The well-known real-time periodic task model first studied by C.L. Liu and J.W. Layland (1973) assumes that each task τ has a worst-case computation time C and that each execution (instance) of the task takes no more than C time units. Based on the worst-case computation time assumption, Liu and Layland derived a utilization bound under which all task sets are schedulable by the fixed-priority scheduling scheme. The assumption and the derived utilization bound are, however, too pessimistic when the average computation times of the tasks are smaller than their worst-case computation times. To improve the schedulability test for such task sets, A.K. Mok and D. Chen (1996) proposed a multiframe task model for characterizing real-time tasks whose computation times vary instance by instance, and derived an improved utilization bound for multiframe task sets. Although Mok and Chen's utilization bound is better than Liu and Layland's, it is still too pessimistic in the sense that many feasible task sets may not be found schedulable using it. C.-C. Han and H.-Y. Tyan (1997) proposed a new, better polynomial-time schedulability test for the periodic task model, and a similar technique can be applied to the multiframe task model. We discuss how the previously proposed schedulability test can be modified for the multiframe task model. We also show that our schedulability test is much better than using Mok and Chen's utilization bound, through theoretical reasoning and thorough performance evaluation results.
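The Liu-Layland utilization bound the abstract refers to can be sketched as a simple sufficient test: n periodic tasks with worst-case computation time C_i and period T_i are fixed-priority schedulable if total utilization does not exceed n(2^(1/n) − 1). The example task sets are hypothetical.

```python
# Sketch of the classical Liu-Layland utilization-bound test. It is a
# sufficient condition only: task sets that fail it may still be schedulable,
# which is exactly the pessimism the multiframe work above attacks.

def liu_layland_schedulable(tasks):
    """tasks: list of (C, T) pairs. Returns True if the bound guarantees
    fixed-priority (rate monotonic) schedulability."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)  # ~0.78 for n = 3, -> ln 2 as n grows
    return utilization <= bound

print(liu_layland_schedulable([(1, 4), (1, 5), (2, 10)]))  # True  (U = 0.65)
print(liu_layland_schedulable([(2, 4), (2, 5), (2, 10)]))  # False (U = 1.10)
```

Replacing each C with the worst case of a highly variable task is what makes this bound so conservative; the multiframe model lets different frames carry different computation times instead.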

  • Design and implementation of a real-time ATM-based protocol server

    Publication Year: 1998 , Page(s): 242 - 252
    Cited by:  Papers (1)

    The paper describes the design and implementation of L4ATM, an ATM (asynchronous transfer mode) based networking server. While ATM emphasizes deterministic high-speed communication, applications cannot yet fully utilize its potential. We demonstrate an architecture, and a corresponding implementation, that resolves this dilemma by developing implementable resource quantification techniques and QoS (quality of service) management algorithms for host resources. L4ATM has been built in the context of DROPS (Dresden Real-Time Operating System). DROPS supports coexisting real-time and time-sharing applications in a μkernel environment. Evaluating L4ATM's implementation in a real-world environment, we show that: (i) performance guarantees are maintained under heavy time-sharing load, and (ii) the implementation significantly outperforms a standard OS.

  • Automatic testing of reactive systems

    Publication Year: 1998 , Page(s): 200 - 209
    Cited by:  Papers (22)

    The paper addresses the problem of automating the production of test sequences for reactive systems. We particularly focus on two points: (1) generating relevant inputs, with respect to some knowledge about the environment in which the system is intended to run; and (2) checking the correctness of the test results, according to the expected behavior of the system. We propose to use synchronous observers to express both the relevance and the correctness of the test sequences. In particular, the relevance observer is used to randomly choose inputs satisfying temporal assumptions about the environment. These assumptions may involve both Boolean and linear numerical constraints. A prototype tool called LURETTE has been developed and experimented with; it works on observers written in the LUSTRE programming language.

  • Using light-weight groups to handle timing failures in quasi-synchronous systems

    Publication Year: 1998 , Page(s): 430 - 439
    Cited by:  Papers (4)

    In a quasi-synchronous environment, the worst-case times associated with a given activity are usually much higher than the average time needed for that activity. Always using those worst-case times can make a system useless; however, not using them may lead to timing failures. On the other hand, fully synchronous behavior is usually restricted to small parts of the global system. In a previously defined architecture, we use this small synchronous part to control and validate the other parts of the system. In this paper we present a light-weight group protocol that, together with the previously defined architecture, makes it possible to efficiently handle timing failures in a quasi-synchronous system. This is especially interesting when active replication is used. It provides application support for fail-safe behavior or controlled (timely and safe) switching between different qualities of service.

  • Integrating multimedia applications in hard real-time systems

    Publication Year: 1998 , Page(s): 4 - 13
    Cited by:  Papers (155)

    This paper focuses on the problem of providing efficient run-time support to multimedia applications in a real-time system where two types of tasks can coexist: multimedia soft real-time tasks and hard real-time tasks. Hard tasks are guaranteed based on worst-case execution times and minimum interarrival times, whereas multimedia and soft tasks are served based on mean parameters. The paper describes a server-based mechanism for scheduling soft and multimedia tasks without jeopardizing the a priori guarantee of hard real-time activities. The performance of the proposed method is compared with that of similar service mechanisms through extensive simulation experiments, and several multimedia applications have been implemented on the HARTIK kernel.

  • A worst case timing analysis technique for multiple-issue machines

    Publication Year: 1998 , Page(s): 334 - 345
    Cited by:  Papers (9)

    We propose a worst-case timing analysis technique for in-order multiple-issue machines. In the proposed technique, timing information for each program construct is represented by a directed acyclic graph (DAG) that shows the dependences among instructions in the construct. From this information, we derive, for each pair of instructions, distance bounds between their issue times. Using these distance bounds, we identify the sets of instructions that can be issued at the same time. Identifying such instructions is an essential task in reasoning about the timing behavior of multiple-issue machines. In order to reduce the complexity of the analysis, the distance bounds are progressively refined through a hierarchical analysis over the program syntax tree in a bottom-up fashion. Our experimental results show that the proposed technique can predict worst-case execution times for in-order multiple-issue machines as accurately as for simpler RISC processors.
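The paper's dependence DAGs carry richer issue-distance information, but the core worst-case computation over such a DAG reduces to a longest-path problem, which can be sketched minimally; the nodes, edges, and latencies below are invented for illustration.

```python
# Simplified sketch: worst-case latency through a small instruction-dependence
# DAG, computed as the longest path via memoized recursion over successors.
from functools import lru_cache

edges = {            # successor lists of a tiny dependence DAG (hypothetical)
    "load": ["add"],
    "add": ["store", "mul"],
    "mul": ["store"],
    "store": [],
}
latency = {"load": 3, "add": 1, "mul": 4, "store": 2}

@lru_cache(maxsize=None)
def worst_case(node):
    """Longest total latency of any dependence chain starting at `node`."""
    return latency[node] + max((worst_case(s) for s in edges[node]), default=0)

print(worst_case("load"))  # 3 + 1 + 4 + 2 = 10
```

Instructions not connected by any path in the DAG are the candidates for simultaneous issue; the pairwise distance bounds in the paper make that determination precise.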

  • Task period selection and schedulability in real-time systems

    Publication Year: 1998 , Page(s): 188 - 198
    Cited by:  Papers (20)

    In many real-time applications, especially those involving computer-controlled systems, the application tasks often have a maximal acceptable latency, and small latencies are preferred to large ones. The interaction between choosing task periods to meet individual latency requirements and scheduling the resulting task set was investigated by D. Seto et al. (1996) using dynamic-priority scheduling methods. We present algorithms based on static-priority scheduling methods to determine optimal periods for each task in the task set. The solution to the period selection problem optimizes a system-wide performance measure, subject to meeting the maximal acceptable latency requirement of each task. The paper also contributes to a new aspect of rate monotonic scheduling: the optimal design of task periods in connection with application-related timing specifications and task set schedulability.

  • Schedulability analysis for tasks with static and dynamic offsets

    Publication Year: 1998 , Page(s): 26 - 37
    Cited by:  Papers (82)  |  Patents (2)

    In this paper we present an extension to current schedulability analysis techniques for periodic tasks with offsets, scheduled under a preemptive fixed-priority scheduler. Previous techniques allowed only static offsets restricted to being smaller than the task periods. With the extension presented in this paper, we eliminate this restriction and allow both static and dynamic offsets. The most significant application of this extension is in the analysis of multiprocessor and distributed systems. We show that we can achieve a significant increase in the maximum schedulable utilization by using the new technique, as opposed to previously known worst-case analysis techniques for distributed systems.
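For background, the classical response-time recurrence that offset-based analysis extends can be sketched as follows: for task i with computation time C_i and higher-priority tasks hp(i) with parameters (C_j, T_j), iterate R = C_i + Σ_j ⌈R/T_j⌉·C_j to a fixed point. The task values are hypothetical; with offsets, the interference terms become tighter than these plain ceilings.

```python
# Sketch of classical fixed-priority response-time analysis (no offsets):
# iterate the interference recurrence until it converges or the deadline is
# exceeded. Offset-aware analysis replaces the ceiling-based interference
# with tighter, offset-dependent terms.
from math import ceil

def response_time(c_i, hp, deadline):
    """hp: list of (C_j, T_j) for higher-priority tasks.
    Returns the worst-case response time, or None if it exceeds the deadline."""
    r = c_i
    while r <= deadline:
        r_next = c_i + sum(ceil(r / t) * c for c, t in hp)
        if r_next == r:
            return r          # fixed point reached: worst-case response time
        r = r_next
    return None               # unschedulable: response exceeds the deadline

print(response_time(3, [(1, 4), (2, 6)], deadline=12))  # 10
```

The recurrence is monotone, so the iteration either converges to the smallest fixed point or climbs past the deadline, which is why the loop can stop as soon as either happens.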
