
Simulation Symposium, 2001. Proceedings. 34th Annual

Date: 26-26 April 2001


Displaying Results 1 - 25 of 44
  • Proceedings. 34th Annual Simulation Symposium

    PDF (223 KB)
    Freely Available from IEEE
  • Author index

    Page(s): 341
    PDF (62 KB)
    Freely Available from IEEE
  • Evolving the Web-based distributed SI/PDO architecture for high-performance visualization

    Page(s): 151 - 158
    PDF (576 KB) | HTML

    We have extended the SI/PDO architecture to allow Web access to visualization tools running on MP systems. We make these tools more easily accessible by providing Web-based interfaces and by shielding the user from the details of these computing environments. We use a multi-tier architecture, where the Java-based GUI tier runs on a Web browser and provides image display and control functions. The visualization tier runs on MP machines. The middle tiers provide custom communication with MP machines, remote file selection, remote launching of services, and load balancing. The system allows for adding and removing of tiers depending upon the situation. This architecture is based on the requirements of our environment: huge data volumes (that cannot be easily moved), use of multiple middleware protocols, MP platform portability, rapid development of the visualization tools, distributed resource management (of MP resources), and the use of existing visualization tools.

  • Efficient concurrent simulation of large networks using various fault models

    Page(s): 51 - 55
    PDF (352 KB) | HTML

    An improved fault simulation environment is described, and details of its faulting capability are presented and tested. We show the versatility of our fault simulator in handling different fault models, such as n-terminal bridge faults, by adding new activity functions to our modeling structure. We use our TUFTsim simulator, which is based on concurrent simulation algorithms, to efficiently fault-simulate large networks; the multiple list traversal mechanism handles the propagation of concurrent elements through the topology. Results on some of the ITC '99 benchmarks are shown for the stuck-at and logical bridge fault models. Using the bridge fault model in combination with the stuck-at model does not degrade simulation efficiency.

  • Building a Web-based federated simulation system with Jini and XML

    Page(s): 143 - 150
    PDF (664 KB) | HTML

    In a Web-based federated simulation system, a group of simulation models residing on different machines attached to the Internet, called federates, collaborate with each other to accomplish the common task of simulating a complex real-world system. To reduce the cost of developing and maintaining simulation models and to facilitate the process of building complex collaborative simulation systems, reuse of existing simulation models and interoperability between disparate simulation models are of paramount importance. Moreover, to make such a system highly extensible, the individual federates, which could reside on the same host or on physically distributed hosts, should be able to freely join and leave a federation without full knowledge of their peer federates. Simply put, an ideal simulation system should allow for quick and cheap assembly of a complex simulation out of independently developed simulations and at the same time allow the participating simulations maximum independence. Fortunately, this is made possible by emerging technologies, notably Jini and the Extensible Markup Language (XML). We introduce Jini and XML and present the design and prototype implementation of a Web-based federated simulation system using them.

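    As a rough illustration of the XML side of such a design, a joining federate might publish a self-description that its peers discover and parse. The element and attribute names below are invented for illustration and are not the paper's actual schema:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical federate description a joining simulator might
    # publish to the lookup service (illustrative schema only).
    msg = """
    <federate name="traffic-model" host="sim3.example.org">
      <publishes>vehicle-count</publishes>
      <subscribes>signal-timing</subscribes>
    </federate>
    """

    fed = ET.fromstring(msg)
    name = fed.get("name")                            # "traffic-model"
    host = fed.get("host")
    topics = [e.text for e in fed.findall("publishes")]
    ```

    Exchanging such self-descriptions in XML is what lets federates join a federation without prior knowledge of their peers: a new federate only needs to parse the schema, not link against the other federates' code.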
  • Application of the ATLAS language in models of urban traffic

    Page(s): 311 - 318
    PDF (592 KB) | HTML

    ATLAS is a specification language defined to outline city sections in order to model and simulate traffic flow. Streets are characterized by their size, direction, number of lanes, etc. Once the urban section is outlined, the constructions are translated into Cell-DEVS models, and the traffic flow is automatically set up. As modelers can focus on the problem to solve, development times for a simulation can be greatly reduced. We present an example application of the specification language to solve specific problems.

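    The kind of cell-based traffic dynamics that Cell-DEVS models capture can be sketched with a minimal single-lane cellular rule (an illustrative toy, not ATLAS's actual translation): a car advances one cell per tick when the cell ahead is free.

    ```python
    def step(lane):
        """Advance a one-lane circular road one tick: a car (1) moves
        into the next cell if that cell is empty (0)."""
        n = len(lane)
        nxt = [0] * n
        for i in range(n):
            if lane[i] == 1:
                if lane[(i + 1) % n] == 0:
                    nxt[(i + 1) % n] = 1   # car advances
                else:
                    nxt[i] = 1             # blocked: car stays put
        return nxt

    lane = [1, 1, 0, 0, 1, 0, 0, 0]
    lane = step(lane)
    ```

    Even this toy shows the appeal of the approach: congestion waves emerge from a purely local rule, which is why describing only the street layout (as ATLAS does) is enough to set up the traffic flow automatically.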
  • Use of VRML in collaborative simulations for the petroleum industry

    Page(s): 319 - 324
    PDF (856 KB) | HTML

    This paper shows the application of the Virtual Reality Modeling Language (VRML) in petroleum industry simulations. The research already accomplished by GRVa/LAMCE (Applied Virtual Reality Group of the Computational Methods Laboratory in Engineering) is presented, confirming VRML's value as an important tool in several areas where the Web is available. VR concepts, advantages, and applicability are also discussed, along with a set of solutions and applications already developed specifically for the petroleum industry in a collaborative environment.

  • Applications of dynamic data flow programming to real-time interactive simulations

    Page(s): 251 - 257
    PDF (616 KB) | HTML

    Within the domain of soft real-time interactive simulations, the timeliness of a computation can be more vital than its accuracy in ensuring the success of the overall system. At the same time, designers wish to provide the most accurate answers possible within the given time constraints. Time complexity-based multilevel modeling provides an avenue for satisfying this trade-off. However, implementing and managing such models in a traditional, third-generation language can dramatically increase the overhead and complexity of the code while limiting portability and code reuse. Many of these issues can be resolved by architecting the overall system around a data flow perspective rather than control flow. The SHADOW system, a language, compiler, and run-time engine built around data flow concepts, has been used to demonstrate the viability of these concepts in both structured laboratory tests and large-scale applications.

  • Improving the execution of groups of simulations on a cluster of workstations and its application to storage area networks

    Page(s): 227 - 234
    PDF (660 KB) | HTML

    Parallel simulation methods can be used to reduce the execution time of simulations of complex systems. This approach is being used to improve the execution time of a storage area network (SAN) simulator designed in our department. From our experience in planning simulation experiments, we have realized that, in most cases, a simulation experiment (a group of simulations) is executed while varying only one input variable, which usually corresponds to an input, workload, or configuration model parameter. We propose two methods to reduce the overall execution time of a simulation experiment using a cluster of workstations. The first method uses the first simulation to tune the remaining work in the experiment. The second method, based on the first, tries to minimize the negative influence of the initial transient period by chaining the simulations in the experiment. We show that these two methods noticeably decrease the overall execution time needed to run the simulations that compose the experiment.

  • An object-oriented modeling scheme for distributed applications

    Page(s): 292 - 299
    PDF (672 KB) | HTML

    A modeling approach is introduced for distributed applications. During the last few years computer networks have come to dominate the computing world, forcing the development of applications operating in a network environment. Since new technologies, such as the WWW, middleware, and co-operative software, have emerged, distributed application functionality has become rather complex and the requirements placed on the underlying network have increased considerably. Distributed applications usually consist of interacting services provided in a multi-level hierarchy. In order to effectively evaluate their performance through simulation, we introduce a multilayer object-oriented modeling scheme that facilitates in-depth, detailed description of distributed applications and supports the most popular architectural models, such as the client/server model and its variations. Application functionality is described using predefined operations, which can be further decomposed into simpler ones through a multilayer hierarchy, resulting in elementary actions that indicate primitive network operations, such as transfer or processing. Important features of the modeling scheme are extensibility and wide applicability. The simulation environment built according to this modeling scheme is also presented, along with an example indicating key features and sample results.

  • PPIM-SIM: an efficient simulator for a parallel processor in memory

    Page(s): 117 - 124
    PDF (620 KB) | HTML

    The gap between the speed of logic and DRAM access is widening. Traditional processors hide some of the mismatch in latency using techniques such as multi-level caches, instruction prefetching, and memory interleaving/pipelining. Even with larger caches, cache miss rates are higher than the rate at which memory can provide data. Moreover, the memory bandwidth visible at the system bus forms a bottleneck. Therefore, there are compelling reasons for integrating DRAM and logic, including: (i) the bandwidth available within the chip is many orders of magnitude higher than that at the memory bus, at a significantly lower access time and with lower power dissipation; and (ii) as typical workloads shift towards data-intensive/multimedia applications, the wide bandwidth can be effectively utilized. To effectively support data-intensive applications, we designed a Parallel Processor in Memory (PPIM) processor. PPIM is based on a distributed data-parallel architecture with limited support for control parallelism. The paper presents ppim-sim, a cycle-accurate simulator that models the PPIM processor in software and is capable of running PPIM program binaries. Experiments conducted to evaluate the simulator using a number of data-intensive application models for varying PPIM configurations are presented. It was observed from the experiments that ppim-sim not only simulates large models in tractable amounts of time, but also is memory-efficient. In addition, the parameterized design of ppim-sim, coupled with robust and effective interfaces, makes it a research tool for studying different processing element and controller architectures implemented in memory.

  • A tool for the design and evaluation of fibre channel storage area networks

    Page(s): 133 - 140
    PDF (692 KB) | HTML

    The fast growth of data intensive applications has caused a change in the traditional storage model. The server-to-disk approach, usually implemented with SCSI buses, is being replaced by storage area networks (SAN), which enable storage to be externalized from servers, thus allowing storage devices to be shared among multiple servers. A SAN is a separate network for storage, isolated from the messaging network and optimized for the movement of data between servers and storage devices. Most current SANs use Fibre Channel as the technology to move data between servers and storage devices. In order to design and evaluate the performance of these systems it is necessary to have adequate tools. Performance evaluation is usually based on either analytical modeling or simulation. The two differ in scope and applicability: simulation modeling offers more freedom, flexibility, and accuracy than analytical methods. Thus, when evaluating the performance of a SAN, simulation modeling should be used. We present the main capabilities of a simulator for Fibre Channel SANs, focusing on its input parameters and output variables. We also show several simple examples of performance measurements that can be obtained using this tool.

  • Simulation-based engineering of complex adaptive systems using a classifier block

    Page(s): 243 - 250
    PDF (664 KB) | HTML

    A complex adaptive system (CAS) is a network of communicating, intelligent agents where each agent adapts its behavior in order to collaborate with other agents to achieve overall system goals. Further, the overall system often exhibits emergent behavior that cannot be achieved by any proper subset of agents alone. A graphical simulation library called Operational Evaluation Modeling for Context-Sensitive Systems (OpEMCSS) has been developed to simulate CAS. This simulation library includes a classifier event action block that is a forward-chaining, expert system controller. The classifier event action block employs evolutionary rule induction methods to discover rules during system design that achieve CAS agent self-organization and adaptation. A part production system is discussed that includes a workflow manager agent employing the classifier event action block.

  • Modeling and analysis of distributed state space generation for timed Petri nets

    Page(s): 93 - 98
    PDF (464 KB) | HTML

    The performance of distributed generation of the state space for timed Petri nets is rather sensitive to the type of analyzed nets. In order to analyze the performance of such an application, the distributed generation is represented by a timed Petri net and the behavior of this net is studied, using a simulation technique, for different combinations of modeling parameters.

  • Pricing QoS: simulation and analysis

    Page(s): 193 - 199
    PDF (608 KB) | HTML

    Ever since its inception, the Internet has seen unprecedented growth. Consequently, researchers have been actively looking for ways to influence the behavior of its selfish users. Pricing was soon recognized as the regulatory tool to provide proper incentives, so that users' self-interest will lead them to modify their usage according to their needs. This leads to better overall network utilization and enhanced user satisfaction. In this work, a scalable pricing framework for QoS-capable networks supporting real-time, adjustable real-time, and non-real-time traffic is studied. The scheme, which belongs to the usage-based methods, is independent of the underlying network and the mechanisms for QoS provisioning. The framework is credit-based, ensuring the fairness, comprehensibility, and predictability of usage cost. On the other hand, it provides means for network providers to ensure, with high probability, cost recovery and profit, competitiveness of prices, and encouragement of client behaviors that enhance the network's efficiency. This is achieved by appropriate charging mechanisms and suitable incentives. The implementation and usage costs of the framework are low. Simulation results suggest that users have better overall satisfaction and better network utilization is achieved, while call blocking probability is reduced.

  • Fidelity evaluation framework

    Page(s): 109 - 116
    PDF (608 KB) | HTML

    While the modeling and simulation community commonly uses the word fidelity, there exists no clearly accepted definition or method of measuring fidelity. We make the following contributions. We present a new approach for measuring fidelity: the fidelity evaluation framework (FEF), which uses a referent, a formal representation of reality intermediate between reality and the simulation. This foundation is advantageous because it confines subjectivity to well-defined framework components: development of the referent and assignment of weights to different referent components. We then provide the first example of composing a detailed referent and two models based on a real-world system with the FEF. We propose and illustrate three new methods of computing fidelity objectively within the FEF: category-based, model-based, and weight-based. This experiment showed that the FEF can provide meaningful and useful measurements of fidelity.

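    A weight-based fidelity computation of the general kind described can be sketched as follows; the attribute names, weights, and scoring rule are invented for illustration and are not the paper's actual FEF definitions:

    ```python
    def fidelity_score(referent, model, weights):
        """Hypothetical weight-based fidelity: the fraction of referent
        attributes the model reproduces, weighted by importance.  The
        subjectivity lives entirely in `referent` and `weights`, as the
        FEF intends; the score itself is computed objectively."""
        total = sum(weights[a] for a in referent)
        matched = sum(weights[a] for a in referent
                      if model.get(a) == referent[a])
        return matched / total

    # referent: formal statement of what reality does
    referent = {"latency_ms": 5, "drop_rate": 0.01, "duplex": True}
    # model: what the simulation actually reproduces
    model    = {"latency_ms": 5, "drop_rate": 0.02, "duplex": True}
    # weights: importance assigned to each referent component
    weights  = {"latency_ms": 3, "drop_rate": 2, "duplex": 1}

    score = fidelity_score(referent, model, weights)   # 4/6 ≈ 0.667
    ```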
  • Neural net simulation: SFSN model for image compression

    Page(s): 325 - 332
    PDF (580 KB) | HTML

    We present a recent simulation of our neural net model for image compression (SFSN), which is based on the Kohonen SOFM system. Our previous work was limited to a certain scope of image domains. Our updated simulator is meant to be very general via a well-constructed universal codebook for each domain of images. It shows an improvement over traditional non-neural peer models (e.g., wavelet and JPEG) in some image domains. We present our neural compression simulator and our most recent results in some important domains, such as satellite and document imaging.

  • A simulation based study of on-demand routing protocols for ad hoc wireless networks

    Page(s): 85 - 92
    PDF (652 KB) | HTML

    Ad hoc networks are wireless, mobile networks that can be set up anywhere and anytime without the aid of any established infrastructure or centralized administration. Because of the limited range of each host's wireless transmission, to communicate with hosts outside its transmission range a host needs to enlist the aid of its nearby hosts in forwarding packets to the destination. Since there is no stationary infrastructure such as base stations, each host also has to act as a router. A routing protocol for ad hoc networks is executed on every host and is therefore subject to the limited resources at each mobile host. A good routing protocol should minimize the computing load on the host as well as the traffic overhead on the network, and a number of routing protocols have been proposed for ad hoc wireless networks. We focus on on-demand schemes, and study and compare the performance of three routing protocols: AODV, CBRP, and DSR. A variety of workloads and scenarios, characterized by the mobility, load, and size of the ad hoc network, were simulated. Our results indicate that despite its improvement in reducing route request packets, CBRP has a higher overhead than DSR because of its periodic hello messages, while AODV's end-to-end packet delay is the shortest compared to DSR and CBRP.

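    The core of any on-demand scheme is route discovery: flooding a route request through the current topology and returning the first path found. A minimal sketch (a breadth-first flood; real protocols add sequence numbers, timeouts, route caches, and, in CBRP's case, clustering):

    ```python
    from collections import deque

    def discover_route(adj, src, dst):
        """Flood a route request outward from src over the current
        connectivity graph; return the first route to dst, or None
        if dst is unreachable.  Illustrative sketch only."""
        parent = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:                  # destination: unwind the path
                route = []
                while u is not None:
                    route.append(u)
                    u = parent[u]
                return route[::-1]
            for v in adj.get(u, ()):      # neighbours rebroadcast RREQ
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        return None

    # ad hoc topology at one instant (node: neighbours in radio range)
    adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
    route = discover_route(adj, "A", "D")     # ["A", "B", "C", "D"]
    ```

    The per-discovery flood is exactly the overhead the compared protocols trade off differently: DSR caches discovered routes, AODV keeps per-hop routing tables, and CBRP limits the flood to cluster heads.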
  • Fault identification in networks by passive testing

    Page(s): 277 - 284
    PDF (612 KB) | HTML

    We employ the finite state machine (FSM) model for networks to investigate fault identification using passive testing. First we introduce the concept of passive testing. Then, we introduce the FSM model with the necessary assumptions and justification. We introduce the fault model and the fault detection algorithm using passive testing. Extending this result, we develop theorems and algorithms for fault identification. An example is given illustrating our approach. Then, extensions to our approach are introduced to achieve better fault identification. We then illustrate our technique through a simulation of a practical X.25 example. Finally, future extensions and potential trends are discussed.

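    The essence of fault detection by passive testing can be sketched as follows: without injecting any inputs, an observer tracks the set of states a fault-free machine could occupy given the observed input/output trace; if that set becomes empty, the observed behavior is impossible for the correct FSM and a fault is detected. The toy machine below is invented for illustration and is not the paper's model:

    ```python
    def passive_test(transitions, states, trace):
        """Watch an observed (input, output) trace and maintain the set
        of states the fault-free machine could currently be in.  An
        empty set means a fault has been detected.  Minimal sketch of
        the detection idea only; identification needs more machinery."""
        candidates = set(states)              # starting state unknown
        for inp, out in trace:
            nxt = set()
            for s in candidates:
                t = transitions.get((s, inp))   # (next_state, output)
                if t is not None and t[1] == out:
                    nxt.add(t[0])
            candidates = nxt
            if not candidates:
                return False                  # fault detected
        return True                           # trace consistent so far

    # toy machine: (state, input) -> (next_state, output)
    T = {("s0", "a"): ("s1", 0), ("s1", "a"): ("s0", 1),
         ("s0", "b"): ("s0", 1), ("s1", "b"): ("s1", 0)}
    ok = passive_test(T, {"s0", "s1"}, [("a", 0), ("a", 1)])
    bad = passive_test(T, {"s0", "s1"}, [("a", 0), ("a", 0)])
    ```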
  • New queuing strategy for large scale ATM switches

    Page(s): 43 - 48
    PDF (376 KB) | HTML

    We study the different buffering techniques used in the literature to solve the contention problem in ATM switching architectures. The objective of our study is to determine the buffer requirements needed to achieve a given quality of service (e.g., a given cell loss probability). Based on this study, we propose a combined central and output queuing (CCOQ) technique for designing large-scale ATM switches. We also propose a general design technique for an N×N large-scale ATM switch with a suitable CCOQ buffer size that reduces both the cell loss probability and the complexity of the memory modules. The switch has to be designed such that it can be implemented using the smallest possible number of VLSI chips. It should also be reliable enough for commercial use, and should support multicast and priority control functions.

  • Implementation of a DEVS-JavaBean simulation environment

    Page(s): 333 - 338
    PDF (472 KB) | HTML

    A component-based simulation technology is introduced. This component-based simulation environment is based on the DEVS (Discrete Event System Specification) formalism and Sun's JavaBean technology. DEVS provides a means of specifying a mathematical object called a system. JavaBean technology allows modelers to drag and drop software components visually and to combine specific JavaBean components to build their software systems. This paper introduces the DEVS-JavaBean simulation environment, which contains basic JavaBean components of DEVS so that a modeler can visually change bean properties and the relationships between JavaBeans. The reusability of components saves modelers development time. In addition, the convenience of JavaBean components, such as drag-and-drop and visual modeling, makes our simulation environment easy to learn and use.

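    A DEVS atomic model couples an external transition function, an internal transition function, a time advance, and an output function. A minimal sketch of the formalism (illustrative only, and in Python rather than the paper's JavaBean components):

    ```python
    class Processor:
        """Toy DEVS-style atomic model: a server that is busy for a
        fixed time after a job arrives, then outputs it and goes idle.
        Illustrative of the formalism, not the DEVS-JavaBean API."""

        def __init__(self, service_time):
            self.service_time = service_time
            self.phase = "idle"
            self.job = None
            self.sigma = float("inf")     # time until next internal event

        def ext_transition(self, elapsed, job):
            """delta_ext: react to an external input event."""
            if self.phase == "idle":      # accept job, schedule completion
                self.phase = "busy"
                self.job = job
                self.sigma = self.service_time
            # a fuller model would queue jobs that arrive while busy

        def int_transition(self):
            """delta_int: fires when sigma elapses."""
            self.phase = "idle"
            self.sigma = float("inf")

        def output(self):
            """lambda: emitted just before the internal transition."""
            return self.job

    p = Processor(5.0)
    p.ext_transition(0.0, "job-1")
    done = p.output()                     # "job-1"
    p.int_transition()
    ```

    Packaging each such model as a JavaBean is what enables the drag-and-drop composition the paper describes: the transition functions become bean methods, and sigma and phase become visually editable bean properties.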
  • Multiuser extensions to the Interactive Land Use VRML Application (ILUVA)

    Page(s): 159 - 166
    PDF (628 KB) | HTML

    Virtual reality simulations implemented in the Virtual Reality Modeling Language (VRML) provide the ability to create Web-based simulation environments with the mark of realism provided by three-dimensional representations. The Interactive Land Use VRML Application (ILUVA) enables users to perform simple site planning by creating building sites and then populating them with buildings, roadways, landscaping, and so on. We describe multiuser extensions that integrate a database interface to allow multiple users to use and save past sessions. The database interface is implemented using servlets and the JDBC support provided in the Java core.

  • Explorative modeling for prioritizing liver transplantation waiting lists

    Page(s): 303 - 310
    PDF (828 KB) | HTML

    Health care systems are known to be quite complex in structure and operation. The technology of liver transplantation is one aspect of health care systems that has always been difficult to grasp. The main problem with liver transplantation is that it is not easy to study using traditional quantitative tools. The paper demonstrates the use of discrete event simulation as an explorative tool to help stakeholders understand the behavior of the system so that they can make informed decisions with regard to the prioritization of patients waiting for transplantation. The paper also shows the construction of a tailor-made package (LiverSim) and provides an example of how this package is used by the stakeholders to assist in the evaluation process. A final lesson drawn is that simulation helps explore issues beyond the boundaries of quantitative results.

  • A connection formalism for the solution of large and stiff models

    Page(s): 258 - 265
    PDF (708 KB) | HTML

    Realistic computer systems are hard to model using state-based methods because of the large state spaces they require and the likely stiffness of the resulting models (because activities occur at many time scales). One way to address this problem is to decompose a model into submodels, which are solved separately but exchange results. We call modeling formalisms that support such techniques "connection formalisms". We describe a new set of connection formalisms that reduce state-space size and solution time by identifying submodels that are not affected by the rest of a model and solving them separately. A result from each solved submodel is then used in the solution of the rest of the model. We demonstrate the use of two of these connection formalisms by modeling a real-world file server in the Mobius modeling framework. The connected models were solved one to two orders of magnitude faster than the original model, with one of these decomposition techniques introducing an error of less than 11%.

  • Simulation-based average case analysis for parallel job scheduling

    Page(s): 15 - 24
    PDF (920 KB) | HTML

    This paper analyses the resource allocation problem in parallel job scheduling, with emphasis given to gang service algorithms. Gang service has been widely used as a practical solution to the dynamic parallel job scheduling problem. To provide a sound analysis of gang service performance, a novel methodology based on the traditional concept of competitive ratio is introduced. Dubbed dynamic competitive ratio, the new method is used to perform a simulation-based average case analysis of resource allocation algorithms. These resource allocation algorithms apply to the gang service scheduling of a workload generated by a statistical model. Moreover, dynamic competitive ratio is the figure of merit used to evaluate and compare packing strategies for job scheduling under multiple constraints. It is shown that for the unidimensional case there is a small difference between the performance of best fit and first fit; first fit can hence be used without significant system degradation. For the multidimensional case, when memory is also considered, we conclude that the resource allocation algorithm must try to balance resource utilization in all dimensions simultaneously, instead of giving priority to only one dimension of the problem.

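    The unidimensional first-fit versus best-fit comparison can be sketched as packing processor demands into fixed-capacity time slices; the workload below is invented for illustration, and the sketch ignores the memory dimension and job migration:

    ```python
    def pack(jobs, capacity, strategy):
        """Pack jobs (processor demands) into fixed-size time slices
        using first fit or best fit; return the number of slices used.
        Fewer slices means each job runs more often, so this count is
        a rough proxy for schedule quality."""
        slices = []                             # free capacity per slice
        for job in jobs:
            fits = [i for i, free in enumerate(slices) if free >= job]
            if not fits:
                slices.append(capacity - job)   # open a new time slice
            elif strategy == "first":
                slices[fits[0]] -= job          # first slice that fits
            else:                               # best fit: tightest slice
                i = min(fits, key=lambda k: slices[k])
                slices[i] -= job
        return len(slices)

    jobs = [5, 3, 4, 2, 6, 1]                   # processor demands
    n_first = pack(jobs, 8, "first")
    n_best = pack(jobs, 8, "best")
    ```

    On workloads like this one the two strategies often tie, which is consistent with the paper's unidimensional finding that the gap between best fit and first fit is small enough to prefer the cheaper first fit.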