Proceedings of the 2nd International Workshop on Distributed Interactive Simulation and Real-Time Applications, 1998

Date: 20 July 1998

13 results
  • Proceedings. 2nd International Workshop on Distributed Interactive Simulation and Real-Time Applications (Cat. No.98EX191)

    Publication Year: 1998
  • On the development of a generic interface for HLA and DIS simulations

    Publication Year: 1998 , Page(s): 52 - 59
    Cited by:  Patents (7)

Discusses the development of a generic interface that enables traditional non-object-oriented applications to become HLA (High Level Architecture) compliant. By providing for both DIS (distributed interactive simulation) and HLA, the proposed interface can support not only new but also existing (legacy) software applications. In addition, the interface makes it easy for simulation applications that today require a standard networking protocol as a backbone to use DIS components in place of corresponding HLA components as an interim solution. After a general overview of the concept of a generic interface, the paper describes the interface components and how they can be used to harmonize the various applications involved in a simulation.
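The adapter idea this abstract describes can be sketched in a few lines. The class and method names below are illustrative, not from the paper: a simulation is written against one generic interface, and a DIS or HLA backend is plugged in behind it, so legacy code need not change to participate in either protocol.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch (all names invented): a generic interface that can
# route the same simulation call to either a DIS or an HLA backend.

class NetworkBackend(ABC):
    @abstractmethod
    def publish_state(self, entity_id, state): ...

class DISBackend(NetworkBackend):
    def publish_state(self, entity_id, state):
        # In a real system this would emit a DIS PDU on the network.
        return f"DIS PDU for {entity_id}: {state}"

class HLABackend(NetworkBackend):
    def publish_state(self, entity_id, state):
        # In a real system this would call the RTI to update attributes.
        return f"HLA attribute update for {entity_id}: {state}"

class GenericInterface:
    """The single API the (possibly legacy) simulation is written against."""
    def __init__(self, backend: NetworkBackend):
        self.backend = backend

    def send(self, entity_id, state):
        return self.backend.publish_state(entity_id, state)

# Swapping protocols is a one-line change; the simulation code is untouched.
sim = GenericInterface(DISBackend())
print(sim.send("tank-1", {"x": 1.0}))
sim = GenericInterface(HLABackend())
print(sim.send("tank-1", {"x": 1.0}))
```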

  • Practical insights into the process of extending a federation: a review of the High Level Architecture Command and Control Experiment

    Publication Year: 1998 , Page(s): 41 - 51

The High Level Architecture (HLA) Command and Control (C2) Experiment was a Defense Modeling and Simulation Office (DMSO) funded effort to evaluate the efficacy of the Modular Reconfigurable C4I Interface (MRCI) in the context of HLA-compliant modeling and simulation. The HLA C2 Experiment was a coordinated effort of HLA technology exploration and MRCI development. The experiment focused on three primary investigations: (1) the inclusion of real-world C2 entities as federates in an existing HLA federation; (2) the extensibility and reuse of an HLA federation; and (3) the accuracy of the federation development and execution process (FEDEP) as a guide for federation design and development. The HLA C2 team crafted and used the experiment requirements, flow and influence process as a visualization of the processes used for defining and designing the experiment and for identifying the relationships among these processes. This paper presents how the sponsor's needs and objectives were carried forward in, and influenced the design of, the HLA C2 federation system within the experiment. It also demonstrates the relationships and influences each experiment phase had on the flow of activities, beginning with the identification of customer needs and continuing through to the finalization of the system and study/experiment designs. It focuses on the key federation design and integration aspects of the HLA C2 Experiment and reports the investigations' associated insights and lessons learned.

  • An SMP-based, low-latency, network interface unit and latency measurement system: the SNAPpy system

    Publication Year: 1998 , Page(s): 62 - 70
    Cited by:  Papers (1)

Latency is an explicit and possibly insidious issue associated with distributed interactive simulations (DISs), whether they are legacy simulations or simulations based on the DIS protocol or on the HLA (High Level Architecture) and RTI (Run-Time Infrastructure) standards. The implementation details of individual simulations, whether located entirely in one room or geographically distributed across a continent, will directly impact the latencies between various elements of the simulation. At SYSTRAN Corp., we have developed a Windows NT-based system, called SNAPpy (Simulation Network Analysis Project), that seeks to provide tools to both measure and reduce the latency associated with DISs. SNAPpy includes two major sections: an NIU (network interface unit) that is multi-threaded and built on an SMP (symmetric multiprocessor) and Windows NT platform, and the SNAP latency measurement section. This paper reviews the architecture of SNAPpy, going into specific details about its make-up and performance. Finally, SNAPpy's potential as a translator of legacy simulations into the HLA/RTI domain is described, and we introduce the SNAP-Lite Latency Measurement System, a new SYSTRAN product.

  • Simulation composability for JSIMS

    Publication Year: 1998 , Page(s): 4 - 14
    Cited by:  Papers (2)

Examines the technology of composability in simulation systems. Composability refers to the ability of a simulation to be flexibly configured to adapt to a range of missions, scenarios, simulation models, hardware environments and security configurations. Composability confers maximum flexibility on the use of the simulation. Simulation composability is a requirement of the Joint Simulation System (JSIMS). JSIMS is currently being developed by the US Department of Defense and is intended to deliver commander and command staff training. This paper examines simulation composability from the JSIMS perspective and explores the overall technical approach and related issues.

  • High Level Architecture for simulation: an update

    Publication Year: 1998 , Page(s): 32 - 40
    Cited by:  Papers (15)

The High Level Architecture (HLA) provides the specification of a common technical architecture for use across all classes of simulations in the US Department of Defense. It provides the structural basis for simulation interoperability. The baseline definition of the HLA includes the HLA rules, the HLA interface specification and the HLA object model template (OMT). The HLA rules are a set of 10 basic rules that define the responsibilities and relationships among the components of an HLA federation. The HLA interface specification provides a specification of the functional interfaces between HLA federates and the HLA runtime infrastructure. The HLA OMT provides a common presentation format for HLA simulation and federation object models. This paper provides a description of the development of the HLA, a technical description of the key elements of the architecture and a discussion of HLA implementation, including HLA supporting processes and software.

  • The distributed interactive simulation (DIS) lethality communication server

    Publication Year: 1998 , Page(s): 82 - 85

A new approach to handling battle simulation lethality is presented. In this approach, a single server provides standard DIS damage states to entities fast enough for most real-time applications. Benefits include freeing DIS simulations from the burden of maintaining damage state tables, reduced DIS pre-exercise preparation, and easier scenario configuration as a whole. These benefits are realized primarily because efforts to prepare and maintain the vulnerability data pertaining to the exercise are not duplicated.
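The core of this approach is a single authoritative lookup. A minimal sketch, with an invented table and invented names (the paper's actual vulnerability data and interface are not given here):

```python
# Hypothetical sketch: simulators query one server for the damage state
# instead of each carrying its own copy of the vulnerability tables.

# (munition, target) -> DIS-style damage state (illustrative entries only)
DAMAGE_TABLE = {
    ("AP_round", "light_vehicle"): "destroyed",
    ("AP_round", "heavy_tank"): "slight_damage",
    ("HE_round", "heavy_tank"): "moderate_damage",
}

def lethality_server(munition, target):
    """Single authoritative lookup; callers never duplicate the table."""
    return DAMAGE_TABLE.get((munition, target), "no_damage")

print(lethality_server("AP_round", "light_vehicle"))  # destroyed
print(lethality_server("HE_round", "light_vehicle"))  # no_damage
```

Because the table lives in one place, updating vulnerability data for an exercise touches the server only, which is the source of the reduced pre-exercise preparation the abstract claims.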

  • Latency measurements obtained from the Simulation Network Analysis Project

    Publication Year: 1998 , Page(s): 71 - 81
    Cited by:  Papers (1)

Simulator time delays (latencies) are an important factor in the simulation world. In research and/or training, any high-fidelity simulation is adversely affected by latencies. The Simulation Network Analysis Project (SNAP) was developed to investigate these latencies. The SNAP system can measure latencies between vital points (stick input, state variables, visual displays and the network interface unit, or any other points of interest) in a standalone simulator and between networked simulators. Data correlation is accomplished via Global Positioning System (GPS) time-stamping. This paper reports on the findings from past latency measurements and key lessons learned. Factors affecting latency, such as network configuration (hardware and software), simulator modifications and network loading, are discussed.
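The correlation step described here reduces to simple arithmetic once every measurement point shares a GPS-derived clock. A sketch with invented point names and times:

```python
# Hypothetical sketch of SNAP-style correlation: events at each instrumented
# point carry a common GPS-derived timestamp, so per-stage latencies are
# just differences along the measurement chain.

# One stimulus traced through a simulator: (measurement point, GPS time in s)
samples = [
    ("stick_input", 0.000),
    ("state_update", 0.018),
    ("visual_display", 0.051),
    ("network_out", 0.062),
]

# Latency between each pair of consecutive points along the pipeline.
latencies = {
    (a, b): tb - ta
    for (a, ta), (b, tb) in zip(samples, samples[1:])
}

for (a, b), dt in latencies.items():
    print(f"{a} -> {b}: {dt * 1000:.0f} ms")
```

The same subtraction works between networked simulators precisely because GPS gives both ends a shared time base, which is the point of the GPS time-stamping mentioned above.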

  • An Implicitly Scalable, Fully Interactive Multimedia Storage Server

    Publication Year: 1998 , Page(s): 104

    First page of the article.

  • A network architecture for remote rendering

    Publication Year: 1998 , Page(s): 88 - 91
    Cited by:  Papers (6)  |  Patents (1)

Internet-based virtual environments (VEs) let users explore multiple virtual worlds with many different geometric models, which are downloaded rather than pre-distributed. To avoid long download times, we have developed a method that optimally utilizes network bandwidth by downloading only the exact portion of geometry that is necessary for rendering. The solution is based on progressive geometry data structures (smooth levels of detail) and on selective downloading.
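The selective-downloading idea can be illustrated with a toy level-of-detail selector. Everything below (level table, error model, thresholds) is invented for illustration, not taken from the paper: the client fetches only the progressive refinement levels whose detail would actually be visible at the current viewing distance.

```python
# Hypothetical sketch: a mesh stored as a base level plus progressive
# refinements; the client downloads only the levels it needs.

# (level, geometric_error, size_in_bytes) -- coarse to fine
LEVELS = [
    (0, 8.0, 10_000),   # base mesh, always needed
    (1, 4.0, 20_000),
    (2, 2.0, 40_000),
    (3, 1.0, 80_000),
]

def levels_to_download(distance, px_per_unit_error_at_unit_dist=100.0,
                       max_screen_error_px=2.0):
    """Pick the refinement levels needed so the projected geometric error
    stays under max_screen_error_px at the given viewing distance."""
    chosen = []
    for level, geom_error, size in LEVELS:
        # Projected error shrinks with distance (simple perspective model).
        screen_error = geom_error * px_per_unit_error_at_unit_dist / distance
        chosen.append((level, size))
        if screen_error <= max_screen_error_px:
            break  # this level is already fine enough; skip the rest
    return chosen

near = levels_to_download(distance=50.0)    # close up: need fine levels
far = levels_to_download(distance=500.0)    # far away: base level suffices
print([lvl for lvl, _ in near], sum(s for _, s in near))  # [0, 1, 2, 3] 150000
print([lvl for lvl, _ in far], sum(s for _, s in far))    # [0] 10000
```

In this toy model the distant object costs 10 KB instead of 150 KB, which is the bandwidth saving the abstract attributes to downloading "only the exact portion of geometry that is necessary for rendering."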

  • Performance evaluation of a dead reckoning mechanism

    Publication Year: 1998 , Page(s): 23 - 29
    Cited by:  Papers (2)

A dead reckoning mechanism allows one to reduce network bandwidth utilization considerably. As available bandwidth is the key problem for large distributed interactive simulation (DIS) systems, we expect that a dead reckoning mechanism will allow us to increase the number of entities involved in a DIS exercise. In this paper, we model a dead reckoning mechanism and we present an evaluation of its performance. Such a mechanism reduces the number of protocol data units (PDUs) exchanged during the simulation, but it also completely changes the nature of the stochastic process which models the arrival of PDUs; indeed, it introduces a form of sporadicity. We present some measurements on a DIS simulator which exhibits such behaviour. Then we investigate, using a simulation tool, the effects of such a mechanism in terms of response time and the number of allowed entities. Finally, some extensions towards the numerical resolution of a large Markov chain associated with the problem are presented.
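The mechanism being evaluated can be sketched concretely. This is a generic first-order dead reckoning model, not the paper's specific one; all names and the threshold are illustrative. The sender emits a state PDU only when the position a receiver would extrapolate from the last PDU drifts beyond a tolerance, which is exactly why PDU arrivals become sporadic rather than periodic.

```python
import math

# Hypothetical first-order dead reckoning sender (names illustrative).

class DeadReckoningSender:
    def __init__(self, threshold):
        self.threshold = threshold        # max tolerated extrapolation error
        self.last_pos = None              # position in the last PDU sent
        self.last_vel = None              # velocity in the last PDU sent
        self.last_time = None             # timestamp of the last PDU

    def maybe_send(self, t, pos, vel):
        """Return a PDU (dict) if one must be sent at time t, else None."""
        if self.last_pos is None:
            return self._send(t, pos, vel)
        # What every receiver currently extrapolates from the last PDU:
        dt = t - self.last_time
        predicted = tuple(p + v * dt
                          for p, v in zip(self.last_pos, self.last_vel))
        if math.dist(predicted, pos) > self.threshold:
            return self._send(t, pos, vel)
        return None                       # receivers' estimate is still good

    def _send(self, t, pos, vel):
        self.last_pos, self.last_vel, self.last_time = pos, vel, t
        return {"time": t, "pos": pos, "vel": vel}

# Straight-line motion: after the first PDU, extrapolation is exact,
# so no further PDUs are emitted -- the bandwidth saving in miniature.
sender = DeadReckoningSender(threshold=1.0)
pdus = [sender.maybe_send(t, (10.0 * t, 0.0), (10.0, 0.0)) for t in range(5)]
sent = [p for p in pdus if p is not None]
print(len(sent))  # 1
```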

  • A central control engine for an open and hybrid simulation environment

    Publication Year: 1998 , Page(s): 15 - 22
    Cited by:  Papers (1)

Embedded control systems are increasing in complexity. This paper describes a modelling and simulation environment which can efficiently handle the horizontal and vertical integration of both the hardware and the software design process. The focus is on simulation and its control aspects, detailing the respective algorithms. The work is conducted within the OMI/TOOLS project funded by the European Community.

  • An implicitly scalable, fully interactive multimedia storage server

    Publication Year: 1998 , Page(s): 92 - 101

We are developing a next-generation multimedia server that provides fully interactive access to tremendous amounts and varieties of real-time and non-real-time multimedia data by hundreds of simultaneous clients. Current multimedia servers are inadequate for this task, given their support of only basic multimedia data types, inherently non-interactive access semantics and/or intrinsic scaling limitations. Our solution abandons the common use of striping and object replication, and implements a random data allocation scheme across a cluster of commodity computers. This scheme provides implicit load balancing both within and among storage nodes of the cluster while supporting virtually any multimedia data type and application access pattern. This paper presents the essential background, design and implementation, and simulation studies of the storage server component of our system. Our results show that we can guarantee with high probability that an arbitrary I/O request can be satisfied within a small delay bound while obtaining high system utilization. Although our specific application is a real-time multimedia storage server, the techniques developed can be applied to scalability in distributed systems in general.
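The random-allocation scheme contrasted with striping above can be sketched in a few lines. The node count, block count and access pattern below are made up for illustration: because block placement is uniformly random and independent of any access pattern, even a skewed pattern spreads its load across nodes.

```python
import random
from collections import Counter

# Hypothetical sketch of random data allocation across storage nodes.

NUM_NODES = 8

def allocate(object_name, num_blocks, rng):
    """Assign each block of an object to a random node; return placement map."""
    return {block: rng.randrange(NUM_NODES) for block in range(num_blocks)}

rng = random.Random(42)  # fixed seed so the demo is repeatable
placement = allocate("movie.mpg", num_blocks=10_000, rng=rng)

# A skewed access pattern (every 3rd block) still hits nodes roughly evenly,
# because the pattern is independent of the random placement.
load = Counter(placement[b] for b in range(0, 10_000, 3))
print(sorted(load.values()))
```

With striping, by contrast, a stride that happens to match the stripe width would hammer a single node; random placement removes that failure mode, which is the "implicit load balancing" the abstract refers to.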
