
Proceedings of the Twenty-Second Annual International Computer Software and Applications Conference (COMPSAC '98), 1998

Date 21-21 Aug. 1998


Displaying Results 1 - 25 of 103
  • Proceedings. The Twenty-Second Annual International Computer Software and Applications Conference (COMPSAC '98) (Cat. No.98CB36241)

    Publication Year: 1998
    PDF (74 KB)
    Freely Available from IEEE
  • Table of contents

    Publication Year: 1998 , Page(s): v - xiii
    PDF (445 KB)
    Freely Available from IEEE
  • Panel Discussion On Real-time Systems

    Publication Year: 1998 , Page(s): 338 - 342
    PDF (77 KB)
    Freely Available from IEEE
  • Panel On Software Component Architectures

    Publication Year: 1998 , Page(s): 596
    PDF (6 KB)
    Freely Available from IEEE
  • Author index

    Publication Year: 1998 , Page(s): 649 - 651
    PDF (322 KB)
    Freely Available from IEEE
  • Architecture of ROAFTS/Solaris: a Solaris-based middleware for real-time object-oriented adaptive fault tolerance support

    Publication Year: 1998 , Page(s): 90 - 98
    Cited by:  Papers (4)
    PDF (220 KB)

    Middleware implementation of the various critical services required by large-scale, complex real-time applications on top of COTS operating systems is an approach of growing interest. Its main goal is to enable a significant reduction in application system design effort by separating the application designer's concerns for application functionality from the concerns for application-independent system issues. The paper presents a middleware architecture named Real-time Object-oriented Adaptive Fault Tolerance Support (ROAFTS) and a prototype implementation, ROAFTS/Solaris, realized on top of both a COTS operating system, Solaris, and a COTS CORBA-compliant ORB, Orbix. ROAFTS supports distributed real-time applications, each structured as a network of Time-triggered Message-triggered Objects (TMOs); the TMO is a major extension of the conventional object for use in hard real-time applications. The major components of ROAFTS include a TMO support manager for supporting the execution of TMOs, a generic fault tolerance server, and a network surveillance manager (NSM) which provides the generic fault tolerance server with fast fault detection notices. The generic fault tolerance server and the NSM are themselves structured as TMOs. A discussion of the effective use of CORBA standards for moderate-precision real-time applications running on COTS operating systems is also presented.
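    An illustrative sketch of the TMO structuring idea (a minimal Python stand-in, not the paper's Solaris/Orbix implementation; class and method names are hypothetical): one object pairs a time-triggered "spontaneous" method, driven by a timer, with a message-triggered service method, both working on the same guarded state.

        import threading, time

        class TimeTriggeredMessageTriggeredObject:
            """Toy TMO: a spontaneous method runs on a fixed period; a service
            method runs when a client sends a request. Both share guarded state."""
            def __init__(self, period_s):
                self._period = period_s
                self._lock = threading.Lock()
                self._timer = None
                self.last_heartbeat = None

            def start(self):
                self._spontaneous_method()              # begin the time-triggered activity

            def _spontaneous_method(self):
                with self._lock:
                    self.last_heartbeat = time.time()   # e.g. periodic surveillance/state update
                self._timer = threading.Timer(self._period, self._spontaneous_method)
                self._timer.daemon = True
                self._timer.start()

            def service_method(self, request):          # message-triggered part
                with self._lock:
                    return f"processed {request!r}; last heartbeat {self.last_heartbeat}"

            def stop(self):
                if self._timer:
                    self._timer.cancel()

        tmo = TimeTriggeredMessageTriggeredObject(period_s=0.5)
        tmo.start()
        print(tmo.service_method("status query"))
        tmo.stop()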

  • Automatic refinement of distributed systems specifications using program transformations

    Publication Year: 1998 , Page(s): 154 - 163
    PDF (192 KB)

    Formal specification techniques and automatic refinement tools for distributed systems have become key issues in current computing technology. The paper reports the development of a refinement tool based on the Extended State Transition Language (Estelle). Estelle is a formal description technique (FDT) for distributed systems and communication protocols, standardized by ISO. The refinement approach targets an object-oriented execution metamodel which is instantiated using C++. Program transformations are the main technology behind the construction of this tool.

  • Framework-oriented analysis

    Publication Year: 1998 , Page(s): 324 - 329
    Cited by:  Papers (3)
    PDF (72 KB)

    Object-oriented frameworks have recently become popular. Application development using frameworks still needs analysis, design, coding and testing. The paper presents an analysis technique, Framework-Oriented Analysis (FOA), for application development using frameworks, together with related techniques. FOA is an extension of object-oriented analysis (OOA), but it exploits the reuse of a framework's software architecture, design, code and test cases. The key features of FOA include comparison-based analysis, feature comparison, hierarchical framework/application (F/A) scenario diagrams, cross-reference checking between framework and application objects and methods, and multi-level requirement checking.
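    An illustrative sketch of one FOA activity, cross-reference checking between framework and application objects/methods (hypothetical names, not the paper's tool):

        def cross_reference_check(framework_api, application_usage):
            """Compare the methods a framework class provides with the methods
            the application actually calls on it, and report mismatches."""
            report = {}
            for cls, used in application_usage.items():
                provided = framework_api.get(cls, set())
                report[cls] = {
                    "undefined_calls": sorted(used - provided),  # app calls the framework lacks
                    "unused_hooks": sorted(provided - used),     # hooks the app never exercises
                }
            return report

        framework_api = {"Document": {"open", "save", "render"}}
        application_usage = {"Document": {"open", "render", "export_pdf"}}
        print(cross_reference_check(framework_api, application_usage))
        # {'Document': {'undefined_calls': ['export_pdf'], 'unused_hooks': ['save']}}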

  • A solution to the distributed mutual exclusion problem in k-groups

    Publication Year: 1998 , Page(s): 302 - 307
    PDF (20 KB)

    The authors consider a distributed system, DS, which consists of a collection of k distinct groups. Each group is a set of processes with distinct identities, and overlapping groups have at least one process in common. The concept of groups is appropriate in situations in which processes belonging to the same group share a particular property. Moreover, k resources are available, each existing in a single instance and assigned to one group at a time. The paper describes an algorithm that solves the distributed mutual exclusion problem within the system DS. The algorithm makes use of a logical structure of k rooted trees and k tokens. The authors study the performance of the algorithm in terms of the number of messages exchanged per critical section, discuss the effect of process failures and token loss on the algorithm, and propose a method to recover from these failures.
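    A much-simplified sketch of token-based mutual exclusion with one token per group (it deliberately omits the k-rooted-tree message passing and the failure recovery the paper relies on; all names are illustrative):

        from collections import deque

        class GroupToken:
            """One token guards the single resource of one group; only the
            current token holder may enter the critical section."""
            def __init__(self, group_id):
                self.group_id = group_id
                self.holder = None
                self.waiting = deque()

            def request(self, process_id):
                if self.holder is None:
                    self.holder = process_id          # token free: grant at once
                else:
                    self.waiting.append(process_id)   # otherwise wait in FIFO order

            def release(self, process_id):
                assert self.holder == process_id, "only the holder may release"
                self.holder = self.waiting.popleft() if self.waiting else None

        token = GroupToken("G1")
        token.request("p1"); token.request("p2")
        assert token.holder == "p1"
        token.release("p1")
        assert token.holder == "p2"                   # next waiting process gets the token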

  • Software engineering for the scalable distributed applications

    Publication Year: 1998 , Page(s): 285 - 292
    Cited by:  Papers (1)
    PDF (72 KB)

    A major problem in the development of distributed applications is that one cannot assume that the environment in which the application is to operate will remain the same. This means that developers must take into account that the application should be easy to adapt. A requirement that is often formulated imprecisely is that an application should be scalable. The authors concentrate on scalability as a requirement for distributed applications: what it actually means, and how it can be taken into account during system design and implementation. They present a framework in which scalability requirements can be formulated precisely. In addition, they present an approach by which scalability can be taken into account during application development. Their approach consists of an engineering method for distributing functionality, combined with an object-based implementation framework for applying scaling techniques such as replication and caching.
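    A minimal sketch of one of the scaling techniques mentioned above, a caching proxy in front of a remote object (illustrative only, not the authors' framework):

        import time

        class CachingProxy:
            """Serve repeated reads from a local cache with a time-to-live,
            reducing the number of calls that reach the remote object."""
            def __init__(self, remote_lookup, ttl_s=5.0):
                self._lookup = remote_lookup
                self._ttl = ttl_s
                self._cache = {}                      # key -> (value, expiry time)

            def get(self, key):
                value, expires = self._cache.get(key, (None, 0.0))
                if time.time() < expires:
                    return value                      # fresh hit: no remote call
                value = self._lookup(key)             # miss or stale: fetch and refresh
                self._cache[key] = (value, time.time() + self._ttl)
                return value

        remote_calls = []
        proxy = CachingProxy(lambda k: remote_calls.append(k) or k.upper(), ttl_s=60)
        proxy.get("status"); proxy.get("status")
        assert remote_calls == ["status"]             # the second read never left the cache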

  • Provably efficient non-preemptive task scheduling with Cilk

    Publication Year: 1998 , Page(s): 602 - 607
    Cited by:  Papers (1)
    PDF (20 KB)

    We consider the problem of scheduling static task graphs using Cilk, a C-based runtime system for multithreaded parallel programming. We assume no preemption of task execution and no prior knowledge of the task execution times. Given a task graph G, the output of the scheduling algorithm is a Cilk program P which, when executed, initiates the tasks consistently with the precedence requirements of G. We show that the Cilk model has restrictions in implementing optimal schedules for certain types of task graphs; however, the restriction does not fundamentally hinder the practical application of Cilk, as it is still possible to produce schedules of reasonably good quality (in the sense of expected execution time). Our algorithm identifies a minimal number of stages, assigns tasks to these stages, and bundles parallel tasks of the same stage into one Cilk procedure. By using Tarjan's algorithm (for set operations) to implement the bundling process, we show that the schedule can be derived in O(n+e) time for all practical purposes, where n and e denote the number of nodes and edges in the task graph G. With P processors, the expected completion time of the scheduled tasks is bounded by T_P = O(T_1/P + S), where T_1 denotes the total work, i.e. the time required to execute all tasks on a single processor, and S denotes the sum (over all stages) of the longest execution time of the tasks at each stage. When the execution times of the tasks are relatively homogeneous, the quality of the schedule generated by our approach is nearly optimal.
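    A minimal sketch of the stage-assignment step (Python rather than Cilk, and without the union-find bundling; the graph encoding is assumed): each task's stage is one more than the deepest stage among its predecessors, computed in O(n+e) over a topological order, and tasks sharing a stage can then be bundled into one parallel procedure.

        from collections import defaultdict, deque

        def assign_stages(tasks, edges):
            """tasks: list of task ids; edges: (u, v) pairs meaning u precedes v.
            Returns {stage_index: [tasks]} using the minimal number of stages."""
            succs, indeg = defaultdict(list), {t: 0 for t in tasks}
            for u, v in edges:
                succs[u].append(v)
                indeg[v] += 1
            stage = {t: 0 for t in tasks}
            ready = deque(t for t in tasks if indeg[t] == 0)
            while ready:                               # Kahn-style sweep, O(n + e)
                u = ready.popleft()
                for v in succs[u]:
                    stage[v] = max(stage[v], stage[u] + 1)
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        ready.append(v)
            bundles = defaultdict(list)
            for t, s in stage.items():
                bundles[s].append(t)                   # tasks in one bundle are mutually independent
            return dict(bundles)

        print(assign_stages(["a", "b", "c", "d"], [("a", "c"), ("b", "c"), ("c", "d")]))
        # {0: ['a', 'b'], 1: ['c'], 2: ['d']}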

  • Euro-conversion and Year 2000: a review of the project situation

    Publication Year: 1998 , Page(s): 525 - 526
    Cited by:  Papers (1)
    PDF (16 KB)

    In 1999 and 2000, major business events such as European monetary union (EMU) and the Year 2000 problem will force organizations, in those situations where IT is critical to business survival, to set up and adequately fund relevant projects. The ability of an organization's IT departments to deliver timely and appropriate solutions will be critical to those companies. While most companies and governmental organizations can use the period until 2002 to become Euro-compliant, financial industries such as banking face extreme time pressure, since they need to be Euro-compliant by the beginning of 1999. There is no single best technical or business approach to making a system Euro-compliant. Based on the current situation in each organization, companies must choose from a wide range of strategies and approaches with different costs and benefits. Our experiences in Euro projects have shown that the overall goal of a Euro-conversion project is to choose the most appropriate mix of technical and business solutions to achieve timely Euro-compliance. Costs have been shown to be of minor importance in this context.

  • Formal specification and simulation of software through graph grammars: a general but minimal approach

    Publication Year: 1998 , Page(s): 148 - 153
    PDF (84 KB)

    High-quality software components require a representation that allows an implementation-independent description of the structure and behavior of software components. Hence, the static as well as the dynamic structure of the system has to be represented in a structured way. Graph transformation systems support static and dynamic modeling through a single computational framework, for the sake of correctness, maintainability, and integrity. The framework is introduced along with the corresponding tool, UPGraDE (Universal Programmed Graph Grammar Development Environment), which is based on the universal graph language GRASP (GRAph grammar with Set Productions). Any type of system can be specified through a minimal set of operations (syntax) together with rules that specify the behavior of any type of software (semantics). The UPGraDE environment, consisting of several fully transparent, interconnected modules performing well-defined tasks, is a highly modular and extensible environment suited for nearly every GRASP development purpose.

  • Security and the World Wide Web

    Publication Year: 1998 , Page(s): 260
    PDF (28 KB)

    First page of the article.

  • Application development process-a pipeline framework

    Publication Year: 1998 , Page(s): 4 - 8
    PDF (44 KB)

    The application development process involves a set of activities that produce an application. Multiple dependencies among multiple activities can affect the development time of the overall application. These activities can be represented as process phases such as analysis, design, implementation, testing and so on. While some development approaches allow a phase to start only after completion of the previous phase, others attempt to start a phase before the previous phase has completed. However, there is no clear methodology for starting the phases of development at the earliest possible time. We propose a framework for the application development process in which communication, dependency, and relative time between phases are the important factors. The proposed methodology for starting the activities of application development transforms the development process into a pipeline structure.
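    A minimal sketch of the pipelining idea (hypothetical phase names and overlap fractions, not the paper's framework): a phase may start once a configured fraction of each phase it depends on has elapsed, so earliest start times follow directly from the dependency structure.

        def earliest_starts(durations, dependencies):
            """durations: {phase: time units}; dependencies: {phase: [(predecessor, overlap)]},
            where overlap is the fraction of the predecessor that must complete first.
            Phases are assumed to be listed in dependency order."""
            start = {}
            for phase in durations:
                preds = dependencies.get(phase, [])
                start[phase] = max((start[p] + overlap * durations[p] for p, overlap in preds),
                                   default=0.0)
            return start

        durations = {"analysis": 4, "design": 6, "implementation": 8, "testing": 5}
        dependencies = {
            "design": [("analysis", 0.5)],            # design starts when analysis is half done
            "implementation": [("design", 0.75)],
            "testing": [("implementation", 0.5)],
        }
        print(earliest_starts(durations, dependencies))
        # {'analysis': 0.0, 'design': 2.0, 'implementation': 6.5, 'testing': 10.5}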

  • Challenges in data management for the United States Department of Defense (DoD) command, control, communications, computers, and intelligence (C4I) systems

    Publication Year: 1998 , Page(s): 622 - 629
    Cited by:  Papers (2)
    PDF (64 KB)

    This paper explores challenges facing data administrators, database engineers, and knowledge-base developers in the management of information in the United States (U.S.) Department of Defense (DoD), particularly in the information systems used to support Command, Control, Communications, Computers, and Intelligence (C4I). These information systems include operational tactical systems, decision-support systems, modeling and simulation systems, and non-tactical business systems, all of which can affect the design, operation, interoperation, and application of C4I systems. Specific topics include issues in integration and interoperability, joint standards, data access, data aggregation, information-system component reuse, and legacy systems. Broad technological trends, as well as the use of specific developing technologies, are discussed in light of how they may enable the U.S. DoD to meet present and future data-management challenges.

  • Evaluation of object-orientation for industrial usage

    Publication Year: 1998 , Page(s): 647 - 648
    PDF (132 KB)

    We have run a small number of pilot projects in object-oriented development (OOD) for small-scale systems of around 40 to 80 thousand lines of C++ code. We believe OOD should contribute to improved software productivity, but we found several issues in OOD that need further improvement for industrial environments.

  • Allocating data objects to multiple sites for fast browsing of hypermedia documents

    Publication Year: 1998 , Page(s): 406 - 411
    Cited by:  Papers (1)  |  Patents (2)
    PDF (44 KB)

    Many World Wide Web applications require access to and the transfer and synchronization of large multimedia data objects (MDOs), such as audio, video and images, across the communication network. The transfer of large MDOs contributes to the response time observed by the end users. As the end users expect strict adherence to response time constraints, the problem of allocating these MDOs so as to minimize response time becomes very challenging. The problem becomes more complex in the context of hypermedia documents (Web pages), wherein these MDOs need to be synchronized during presentation to the end users. Since the basic problem of data allocation in distributed database systems is NP-complete, a need exists to pursue and evaluate solutions based on heuristics for generating near-optimal MDO allocations. In this paper, we (i) conceptualize this problem by using a navigational model to represent hypermedia documents and their access behavior by end users, (ii) formulate the problem by developing a base case cost model for response time, (iii) design two algorithms to find near-optimal solutions for allocating MDOs of the hypermedia documents while adhering to the synchronization requirements, and (iv) evaluate the trade-off between the time complexity to obtain the solution and the quality of the solution by comparing the algorithms' solutions with an exhaustive solution over a set of experiments.
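    A minimal sketch of a greedy heuristic in the spirit described above (the cost model is a stand-in, not the paper's base-case model): place each MDO at the site that minimizes its estimated contribution to response time, given per-site access frequencies and inter-site transfer costs.

        def greedy_allocate(mdo_sizes, access_freq, transfer_cost):
            """mdo_sizes: {mdo: bytes}; access_freq: {(site, mdo): accesses per unit time};
            transfer_cost: {(from_site, to_site): seconds per byte}. Returns {mdo: site}."""
            sites = {s for s, _ in access_freq}
            allocation = {}
            for mdo, size in mdo_sizes.items():
                def expected_delay(host):
                    return sum(access_freq.get((s, mdo), 0) * size * transfer_cost[(host, s)]
                               for s in sites)
                allocation[mdo] = min(sites, key=expected_delay)   # cheapest hosting site
            return allocation

        sizes = {"video1": 8_000_000, "image1": 200_000}
        freq = {("siteA", "video1"): 10, ("siteB", "video1"): 1,
                ("siteA", "image1"): 2, ("siteB", "image1"): 9}
        cost = {("siteA", "siteA"): 0.0, ("siteA", "siteB"): 1e-6,
                ("siteB", "siteA"): 1e-6, ("siteB", "siteB"): 0.0}
        print(greedy_allocate(sizes, freq, cost))      # video1 -> siteA, image1 -> siteB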

  • Maintaining execution histories for understanding the execution of business processes

    Publication Year: 1998 , Page(s): 528 - 533
    PDF (336 KB)

    As database and workflow technologies are used to manage business processes, decision-makers of enterprises must query the execution of business processes to understand and refine these processes for expected throughput and quality. Introducing the representation of business processes in the database schema allows the execution histories of business processes to be maintained in the database for supporting queries on how business processes are executed. In this work, we incorporate finite state machines into the entity-relationship (ER) model for representing business processes in the database schema. Analytical systems can be developed on the proposed representation to assist decision-makers in observing the business processes.
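    A minimal sketch of keeping an execution history for a process modeled as a finite state machine (the schema is illustrative, not the paper's ER extension): each transition of a process instance is appended to a history table that decision-makers can query later.

        import sqlite3, datetime

        TRANSITIONS = {("submitted", "approve"): "approved",   # toy order-approval process
                       ("submitted", "reject"): "rejected",
                       ("approved", "ship"): "shipped"}

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE history (process_id TEXT, from_state TEXT, event TEXT,"
                   " to_state TEXT, at TEXT)")

        def fire(process_id, current_state, event):
            """Apply one transition and record it in the execution history."""
            new_state = TRANSITIONS[(current_state, event)]
            db.execute("INSERT INTO history VALUES (?, ?, ?, ?, ?)",
                       (process_id, current_state, event, new_state,
                        datetime.datetime.now().isoformat()))
            return new_state

        state = fire("order-42", "submitted", "approve")
        state = fire("order-42", state, "ship")
        for row in db.execute("SELECT from_state, event, to_state FROM history"):
            print(row)    # such queries show how the business process actually executed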

  • Evolving the airborne warning and control system (AWACS)

    Publication Year: 1998 , Page(s): 364 - 367
    Cited by:  Papers (1)
    PDF (108 KB)

    We give an overview of ongoing development efforts and supporting research which is evolving the United States Air Force's Airborne Warning and Control System (AWACS). AWACS is an airplane-based sensor and command and control platform with an on-board mission computing system that presents operators with sensor and other information subject to real-time quality of service requirements. The legacy mission computing system is mainframe-based and relies on a centralized, cyclic scheduling approach which makes maintenance and improvement very difficult. We briefly discuss interesting aspects of our efforts to upgrade this system to use emerging real-time distributed object technology, object management, and modern scheduling and schedulability analysis. Advance demonstrations of this upgrade have been very successful, and test flights should be underway at the time of the conference. Within this context, we give examples of technology transfer.

  • Code synthesis based on object-oriented design models and formal specifications

    Publication Year: 1998 , Page(s): 393 - 398
    Cited by:  Papers (2)
    PDF (212 KB)

    The paper presents an approach to synthesizing functional and robust code from object-oriented design models and Z data and operation specifications. The approach is based on an integrated notation combining the Unified Modeling Language (UML) with a slightly extended Z notation that includes object-oriented concepts and structures. The approach generates fully functional code which can be compiled and executed without modification. The information from the object-oriented analysis and design models, along with the formal specifications, is combined, analyzed and translated into an intermediate representation from which code can be generated. A research prototype has been developed to demonstrate the feasibility and effectiveness of the approach.
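    A minimal sketch of the general idea of turning an operation specification into robust code (Python with hypothetical spec fields; the paper's UML/Z notation and generator are far richer): the specified pre- and postconditions become run-time checks around the generated operation.

        def synthesize_operation(name, precondition, body, postcondition):
            """Build a function that guards the specified body with the declared
            pre- and postconditions, as a simple code generator might."""
            def operation(state, **inputs):
                assert precondition(state, **inputs), f"{name}: precondition violated"
                before = dict(state)
                body(state, **inputs)
                assert postcondition(before, state, **inputs), f"{name}: postcondition violated"
            return operation

        # A 'withdraw' operation on a simple account object, specified Z-style.
        withdraw = synthesize_operation(
            "withdraw",
            precondition=lambda st, amount: 0 < amount <= st["balance"],
            body=lambda st, amount: st.update(balance=st["balance"] - amount),
            postcondition=lambda old, st, amount: st["balance"] == old["balance"] - amount,
        )

        account = {"balance": 100}
        withdraw(account, amount=30)
        print(account)    # {'balance': 70}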

  • Towards component-based software engineering

    Publication Year: 1998
    Cited by:  Papers (6)
    PDF (160 KB)

    The software community faces a major challenge raised by the fast-growing demand for rapid and cost-effective development and maintenance of large-scale, complex software systems. To meet this challenge, the emerging trend is to adopt component-based software engineering (CBSE). The key difference between CBSE and traditional software engineering is that CBSE views a software system as a set of off-the-shelf components integrated within an appropriate software architecture. CBSE promotes large-scale reuse, as it focuses on building software systems by assembling off-the-shelf components rather than implementing the entire system from scratch. CBSE also emphasizes the selection and creation of software architectures that allow systems to achieve their quality requirements. As a result, CBSE has introduced fundamental changes into software development and maintenance.

  • Developing situationally specific methods through stakeholder collaboration

    Publication Year: 1998 , Page(s): 179 - 185
    PDF (44 KB)

    Software development methods can only be used effectively where there is a close match between the method being used and the situation in which it is applied. There are two key features that need to be considered by those concerned with formalising the development of situational methods: (i) stakeholder input and (ii) the method engineering process. The method presented, MEWSIC (Method Engineering With Stakeholder Input and Collaboration), formalises the development of situational methods so that links to quality assurance processes are retained. MEWSIC accounts for the number of stakeholders who have a legitimate interest in the success of the project, but distinguishes between those who provide input that informs the method engineering process and those who carry out this process. A description of MEWSIC is given, bringing out the collaborative nature of the approach. The authors then discuss MEWSIC's place within software engineering, particularly in relation to method engineering approaches and quality assurance mechanisms.

  • Component-based integrated systems development: a model for the emerging procurement-centric approach to software development

    Publication Year: 1998 , Page(s): 128 - 135
    PDF (64 KB)

    The continuing increase of interest in component-based software engineering (CBSE) signifies the emergence of a new development trend within the software industry. Unlike preceding software engineering models, CBSE relies heavily on the utilization of commercial off-the-shelf (COTS) products as the underlying foundation for new product development. Its emphasis is on acquiring reusable products to develop complex integrated solutions rather than developing them from scratch. Compared to traditional development-centric approaches, CBSE promises a more efficient and effective means of delivering software solutions to the market. However, underestimating the risks associated with the acquisition of these software components has resulted in schedule delays and higher development and maintenance costs. The paper describes a procurement-centric model that we have used to effectively support the development of a CBSE project at the Mitsubishi Consumer Electronics Engineering Center (CEEC). The Component-based Integrated Systems Development (CISD) model identifies key engineering phases, and their sub-phases, that are often ignored or merely implicit in existing development-centric models. The paper also presents the lessons learned in implementing CBSE at the CEEC.

  • Binding object models to source code: an approach to object-oriented re-architecting

    Publication Year: 1998 , Page(s): 26 - 31
    Cited by:  Papers (5)
    PDF (104 KB)

    Object-oriented re-architecting (OORA) concerns the identification of objects in procedural code with the goal of transforming a procedural program into an object-oriented one. We have developed a method that addresses the problem of object identification from two different directions: 1) building an object model of the application based on system documentation, to ensure the creation of application-semantic classes; and 2) analyzing the source code to identify potential class candidates on the basis of compound data types and data flow analysis. Object model classes are bound to class candidates to prepare a forward-biased, and thus semantically meaningful, program transformation at the source code level. In this paper we define a similarity measure for classes that enables the binding process. We also describe the constraints and benefits of human intervention in this process. We have applied the method to a real-world embedded software system to identify potential classes; results from the case study are given in the paper.
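    A minimal sketch of what a class similarity measure for the binding step might look like (a plain Jaccard-style overlap of attribute and operation names; the paper's actual measure is not reproduced here):

        def class_similarity(model_class, candidate_class):
            """Score in [0, 1] how well a documented object-model class matches
            a class candidate mined from the procedural source code."""
            def jaccard(a, b):
                a, b = set(a), set(b)
                return len(a & b) / len(a | b) if a | b else 1.0
            return 0.5 * jaccard(model_class["attributes"], candidate_class["fields"]) \
                 + 0.5 * jaccard(model_class["operations"], candidate_class["functions"])

        model = {"attributes": {"speed", "position"}, "operations": {"accelerate", "brake"}}
        candidate = {"fields": {"speed", "position", "last_error"},
                     "functions": {"accelerate", "brake", "log_error"}}
        print(round(class_similarity(model, candidate), 2))   # 0.67 -> plausible binding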
