
41st Annual Simulation Symposium (ANSS 2008)

Date: 13-16 April 2008


Displaying Results 1 - 25 of 45
  • [Front cover]

    Publication Year: 2008 , Page(s): c1
  • [Title page i]

    Publication Year: 2008 , Page(s): i
  • [Title page iii]

    Publication Year: 2008 , Page(s): iii
  • [Copyright notice]

    Publication Year: 2008 , Page(s): iv
  • Table of contents

    Publication Year: 2008 , Page(s): v - viii
  • Message from the Chairs

    Publication Year: 2008 , Page(s): ix
  • Program Committee

    Publication Year: 2008 , Page(s): x - xi
  • External reviewers

    Publication Year: 2008 , Page(s): xii
  • Service and Utility Oriented Distributed Computing Systems: Challenges and Opportunities for Modeling and Simulation Communities

    Publication Year: 2008 , Page(s): 3
    Cited by:  Patents (1)

    Summary form only given. Grids and peer-to-peer (P2P) networks have emerged as popular platforms for next-generation parallel and distributed computing. In these environments, resources are geographically distributed, managed and owned by various organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domains of security, resource and policy heterogeneity, fault tolerance, and dynamic resource conditions. In such dynamic distributed computing environments, it is challenging to carry out resource management design studies in a repeatable and controlled manner, as resources and users are autonomous and distributed across multiple organizations with their own policies. Therefore, simulation has emerged as the most feasible technique for analyzing resource allocation policies. This paper presents emerging trends in distributed computing and their promise for revolutionizing the computing field, and identifies the distinct characteristics of and challenges in building such systems. We motivate opportunities for the modeling and simulation communities and present our discrete-event grid simulation toolkit, GridSim, used by researchers worldwide to investigate the design of utility-oriented computing systems such as data centers and grids. We present various case studies on the use of GridSim in the modeling and simulation of business grids, parallel application scheduling, workflow scheduling, and service pricing and revenue management.

  • Prototyping and Analysis of an Ontology-Based Personalized Web Service Architecture

    Publication Year: 2008 , Page(s): 7 - 14

    Current Web service technologies provide a set of well-defined syntaxes for service discovery, composition, and invocation. However, they lack support for semantic interoperability and personalization. This paper briefly describes an agent-driven, P2P-based, distributed service registry architecture referred to as the personalized Web service architecture (PWSA), which supports personal-based service discovery, composition, and invocation. A prototype implementation and a performance analysis study of PWSA demonstrate the self-organization property of the architecture and its scalability in terms of the number of services and the number of search queries executed by user agents.

  • Cross-Layer Response Surface Methodology Applied to Wireless Mesh Network VoIP Call Capacity

    Publication Year: 2008 , Page(s): 15 - 22
    Cited by:  Papers (1)

    Wireless mesh networking is a recent technology proposed to extend and improve wireless LAN capabilities over even larger geographical areas. Parallel advances in network-delivered services have created a situation where more and more end users will want to use richer network-based services. However, wireless networks have capacity bottlenecks owing to their broadcast nature and MAC-layer constraints, among other factors. The paradigm of cross-layer design is especially relevant in wireless networks as a way to effect key optimizations that improve performance. In this paper, we look at the specific service of Voice over IP over wireless mesh networks and quantify the capacity constraints using algebraic functions fit to experimental (simulation) results. We use the Response Surface Methodology for this purpose. We also look at how application-level parameters can be adapted to MAC-level parameters so as to improve total call capacity, by use of our cross-layer "metamodel".

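The response-surface idea described above can be sketched in a few lines: fit a second-order polynomial to simulated capacity measurements by least squares. The factors, data, and surface below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Synthetic illustration (factors, data, and the surface itself are
# invented, not the paper's): fit a full second-order response surface
#   y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to "simulation" results by least squares.

def fit_response_surface(x1, x2, y):
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def predict(beta, x1, x2):
    return (beta[0] + beta[1]*x1 + beta[2]*x2
            + beta[3]*x1**2 + beta[4]*x2**2 + beta[5]*x1*x2)

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 1, 50)
x2 = rng.uniform(0, 1, 50)
y = 10 + 4*x1 - 3*x2 - 2*x1**2 + x1*x2     # known ground-truth surface

beta = fit_response_surface(x1, x2, y)
print(np.round(beta, 3))                    # ≈ [10, 4, -3, -2, 0, 1]
```

With noise-free quadratic data the fit recovers the surface exactly; with real simulation output it returns the least-squares "metamodel" over which capacity can then be optimized.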
  • TRAILS, a Toolkit for Efficient, Realistic and Evolving Models of Mobility, Faults and Obstacles in Wireless Networks

    Publication Year: 2008 , Page(s): 23 - 32
    Cited by:  Papers (2)

    We present a new simulation toolkit called TRAILS (Toolkit for Realism and Adaptivity In Large-scale Simulations), which extends the ns-2 simulator by adding important functionality and optimizing certain critical simulator operations. The added features provide the tools to study wireless networks of high dynamics. TRAILS facilitates the implementation of advanced mobility patterns, obstacle presence, disaster scenarios, and failure injection that can change dynamically throughout the execution of the simulation. Moreover, we define a set of utilities that enhance the use of ns-2. This functionality is implemented in a simple and flexible architecture that follows design patterns and object-oriented and generic programming principles, maintaining a proper balance between reusability, extensibility, and ease of use. We evaluate the performance of TRAILS and show that it offers significant speed-ups in ns-2 execution time in certain important, common wireless settings. Our results also show that this is achieved with minimal overhead in terms of memory usage.

  • SCAR - Scattering, Concealing and Recovering Data within a DHT

    Publication Year: 2008 , Page(s): 35 - 42
    Cited by:  Papers (1)  |  Patents (1)

    This paper describes a secure and reliable method for storing data in a distributed hash table (DHT), leveraging the inherent properties of the DHT to provide a secure storage substrate. The framework presented is referred to as "Scatter, Conceal, and Recover" (SCAR). The standard method of securing data in a DHT is to encrypt the data using symmetric encryption before storing it in the network. SCAR provides this level of security, but also prevents any known cryptanalysis from being performed. It does this by breaking the data into smaller blocks and scattering these blocks throughout the DHT. Hence, SCAR prevents any unauthorized user from obtaining the entire encrypted data block. SCAR uses hash chains to determine the storage locations for these blocks within the DHT. To ensure storage availability, SCAR uses an erasure coding scheme to provide full data recovery given only partial block recovery. This paper first presents the SCAR framework and its associated protocols and mechanisms. The paper then discusses a prototype implementation of SCAR and presents a simulation-based experimental study. The results show that in order for the erasure coding techniques used by SCAR to be effective, P2P nodes must be sufficiently available.

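The hash-chain idea above can be illustrated in miniature. The key derivation and block layout below are assumptions for the sketch, not SCAR's actual protocol (and the erasure-coding layer is omitted entirely).

```python
import hashlib

# Illustrative only (key derivation and block layout are assumptions, not
# SCAR's actual protocol): derive the DHT key of each data block from a
# hash chain seeded by a secret, so only the key holder can enumerate
# where the blocks were scattered.

def hash_chain_keys(secret: bytes, n_blocks: int):
    """key_0 = H(secret); key_i = H(key_{i-1})."""
    keys, h = [], hashlib.sha256(secret).digest()
    for _ in range(n_blocks):
        keys.append(h.hex())
        h = hashlib.sha256(h).digest()
    return keys

def scatter(data: bytes, block_size: int, secret: bytes):
    """Split data into blocks and map each block to its DHT key."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return dict(zip(hash_chain_keys(secret, len(blocks)), blocks))

def recover(dht: dict, secret: bytes, n_blocks: int) -> bytes:
    """Re-derive the chain and fetch the blocks back in order."""
    return b"".join(dht[k] for k in hash_chain_keys(secret, n_blocks))

stored = scatter(b"some sensitive payload", 4, b"s3cret")
assert recover(stored, b"s3cret", len(stored)) == b"some sensitive payload"
print(len(stored), "blocks scattered")
```

An observer who sees individual DHT entries cannot link them without the secret, which is the property the abstract describes.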
  • A Simulation Study of Common Mobility Models for Opportunistic Networks

    Publication Year: 2008 , Page(s): 43 - 50
    Cited by:  Papers (7)

    Understanding mobility characteristics is important for the design and analysis of routing schemes for mobile ad hoc networks (MANETs). This is especially true for mobile opportunistic networks, where node mobility is utilized to achieve message delivery. In this paper, we study the properties of common mobility models. Specifically, we study inter-contact times of mobile nodes in the random waypoint and random direction mobility models under opportunistic network settings. We also introduce a modified random waypoint model with hot spots to study its mobility properties. Through extensive simulations, we provide results on the mobility properties of the random waypoint and random direction models. Further, our modified random waypoint model with hot spots is also found to exhibit the approximate power-law and exponential inter-contact time dichotomy found in real-world mobility traces, as described in recent literature.

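A minimal sketch of how inter-contact times might be measured under a random waypoint model — the speeds, contact radius, and step counts below are invented for illustration, not the paper's settings:

```python
import math
import random

# Two nodes move by random waypoint in a unit square; an inter-contact
# time is the number of steps between losing contact (distance >= r)
# and regaining it. All parameters here are invented.

def random_waypoint_positions(steps, speed=0.02, seed=1):
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    tx, ty = rng.random(), rng.random()
    out = []
    for _ in range(steps):
        d = math.hypot(tx - x, ty - y)
        if d < speed:                       # waypoint reached: pick a new one
            x, y = tx, ty
            tx, ty = rng.random(), rng.random()
        else:                               # move toward the waypoint
            x += speed * (tx - x) / d
            y += speed * (ty - y) / d
        out.append((x, y))
    return out

def inter_contact_times(pa, pb, r=0.1):
    gaps, gap, had_contact = [], 0, False
    for (ax, ay), (bx, by) in zip(pa, pb):
        if math.hypot(ax - bx, ay - by) < r:    # in contact
            if had_contact and gap > 0:
                gaps.append(gap)
            had_contact, gap = True, 0
        elif had_contact:
            gap += 1
    return gaps

a = random_waypoint_positions(20000, seed=1)
b = random_waypoint_positions(20000, seed=2)
gaps = inter_contact_times(a, b)
print(len(gaps), sum(gaps) / max(len(gaps), 1))
```

Plotting the complementary CDF of `gaps` on log-log and log-linear axes is the usual way to check for the power-law/exponential dichotomy the abstract mentions.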
  • Simulation of Buffer Management Policies in Networks for Grids

    Publication Year: 2008 , Page(s): 51 - 60

    Grid technologies are emerging as the next generation of distributed computing, allowing the aggregation of resources that are geographically distributed across different locations. The network remains an important requirement for any Grid application, as the entities involved in a Grid system (such as users, services, and data) need to communicate with each other over a network. The performance of the network must therefore be considered when carrying out tasks such as scheduling, migration, or monitoring of jobs. Network buffer management policies affect network performance: overly large buffers can lead to poor latencies, while keeping buffers small can cause many packet drops and low link utilization. Therefore, network buffer management policies should be considered when simulating a real Grid system. In this paper, we introduce network buffer management policies into the GridSim simulation toolkit. Our framework allows new policies to be implemented easily, thus enabling researchers to create more realistic network models. Fields that can harness our work include scheduling and QoS provision. We present a comprehensive description of the overall design and a use case scenario demonstrating how the conditions of links vary over time.

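The buffer-size trade-off described above can be seen in a toy drop-tail simulation (invented parameters; this is not GridSim's framework):

```python
from collections import deque

# The link serves one packet per tick; arrivals beyond the buffer limit
# are dropped. Small buffers drop packets under bursts; large buffers
# trade drops for queueing delay.

def run_drop_tail(arrivals, buffer_size):
    q, drops, delays = deque(), 0, []
    for t, n in enumerate(arrivals):
        for _ in range(n):
            if len(q) < buffer_size:
                q.append(t)                  # remember the arrival tick
            else:
                drops += 1                   # buffer full: drop-tail
        if q:
            delays.append(t - q.popleft())   # serve one packet this tick
    avg_delay = sum(delays) / len(delays) if delays else 0.0
    return drops, avg_delay

burst = [3, 0, 0] * 100                      # bursty arrivals, mean 1 per tick
for size in (2, 8, 32):
    print(size, run_drop_tail(burst, size))
```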
  • An Efficient Approach for Location Updating in Mobile Ad Hoc Networks

    Publication Year: 2008 , Page(s): 61 - 67

    In this paper, we consider the problem of location updating in mobile ad hoc networks (MANETs). We propose a node stability-based location updating approach. To optimize routing, most existing routing algorithms use some mechanism for determining a node's neighbours. This information is stored in a table called the neighbour table, and updating it is referred to as location updating. A preliminary set of results shows that our proposed algorithm outperforms the typical location updating algorithm used in existing work.

  • Service and Utility Oriented Distributed Computing Systems: Challenges and Opportunities for Modeling and Simulation Communities

    Publication Year: 2008 , Page(s): 68 - 81
    Cited by:  Patents (5)

    Grids and peer-to-peer (P2P) networks have emerged as popular platforms for next-generation parallel and distributed computing. In these environments, resources are geographically distributed, managed and owned by various organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domains of security, resource and policy heterogeneity, fault tolerance, and dynamic resource conditions. In such dynamic distributed computing environments, it is challenging to carry out resource management design studies in a repeatable and controlled manner, as resources and users are autonomous and distributed across multiple organizations with their own policies. Therefore, simulation has emerged as the most feasible technique for analyzing resource allocation policies. This paper presents emerging trends in distributed computing and their promise for revolutionizing the computing field, and identifies the distinct characteristics of and challenges in building such systems. We motivate opportunities for the modeling and simulation communities and present our discrete-event grid simulation toolkit, GridSim, used by researchers worldwide to investigate the design of utility-oriented computing systems such as data centers and grids. We present various case studies on the use of GridSim in the modeling and simulation of business grids, parallel application scheduling, workflow scheduling, and service pricing and revenue management.

  • A Primer for Real-Time Simulation of Large-Scale Networks

    Publication Year: 2008 , Page(s): 85 - 94
    Cited by:  Papers (5)

    Real-time network simulation refers to simulating computer networks in real time so that the virtual network can interact with real implementations of network protocols, network services, and distributed applications. In this paper, we present the motivation behind real-time network simulation and compare it against other major networking research tools, including analytical models, physical testbeds, simulation, and emulation. We introduce PRIME, a parallel real-time network simulator, and provide a summary of the techniques that allow PRIME to model large-scale networks and interact with many real applications under the real-time constraint. We also discuss ongoing research efforts that will allow real-time network simulation to assume an important role in supporting future networking research.

  • Executable Protocol Models as a Requirements Engineering Tool

    Publication Year: 2008 , Page(s): 95 - 102

    Functional prototypes and simulations are a well-recognized and valued tool for building a shared understanding of requirements between users and developers. However, the development of such artifacts does not sit well with traditional modeling techniques, which do not lend themselves to direct execution. Consequently, building prototypes and simulations becomes a diversion from the mainstream development process, and sometimes even competes with it. We propose that the resolution to this conflict lies in promoting the role of executable behavioral models, so that artifacts supporting behavioral simulation are a by-product of the mainstream modeling process. We discuss why conventional modeling techniques are not suited to this, and we describe an innovative behavioral modeling technique, Protocol Modeling, that is well suited to direct execution. Using Protocol Modeling, a behavioral entity (business object or process) is modeled in terms of its event protocol: the conditions under which it accepts or refuses events. Such models capture behavioral integrity rules at the level of business events, and can be composed using the semantics of Hoare's CSP, allowing concise and incremental representation. Direct execution of the model is achieved using a tool that simulates a normal user interface, so that non-technical stakeholders can review and explore behavior while requirements are being solidified.

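The event-protocol idea can be pictured as a table-driven state machine in which an entity accepts or refuses events according to its current state. The `Order` object and its events below are invented for illustration and are not from the paper:

```python
# A behavioral entity modeled by its event protocol: the set of events
# it accepts in each state. Anything else is refused.

class ProtocolError(Exception):
    pass

class Order:
    # state -> events accepted in that state (the event protocol)
    PROTOCOL = {
        "new":       {"confirm", "cancel"},
        "confirmed": {"ship", "cancel"},
        "shipped":   {"deliver"},
        "delivered": set(),
        "cancelled": set(),
    }
    TRANSITION = {"confirm": "confirmed", "cancel": "cancelled",
                  "ship": "shipped", "deliver": "delivered"}

    def __init__(self):
        self.state = "new"

    def handle(self, event: str):
        if event not in self.PROTOCOL[self.state]:
            raise ProtocolError(f"{event!r} refused in state {self.state!r}")
        self.state = self.TRANSITION[event]

o = Order()
for e in ("confirm", "ship", "deliver"):
    o.handle(e)
print(o.state)    # prints "delivered"
```

Such a model is directly executable, which is the property the abstract argues makes it usable as a requirements-engineering prototype.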
  • CODES: An Integrated Approach to Composable Modeling and Simulation

    Publication Year: 2008 , Page(s): 103 - 110
    Cited by:  Papers (9)

    In component-based simulation, models developed in different locations and for specific purposes can be selected and assembled in various combinations to meet diverse user requirements. This paper proposes CODES (COmposable Discrete-Event scalable Simulation), an approach to component-based modeling and simulation that supports model reuse across multiple application domains. A simulation component is viewed by the modeller as a black box with in- and/or out-channels. The attributes and behavior of a component, abstracted as a meta-component, are described using COML (COmponent Markup Language), a markup language we propose for representing simulation components. The integrated approach, supported by our proposed COSMO (COmponent-oriented Simulation and Modeling Ontology) ontology, consists of four main steps. Component discovery returns a set of syntactically valid model components. Syntactic composability is determined by our proposed EBNF syntactic composition rules. Validation of semantic composability is performed using our proposed data and behavior alignment algorithms. Semantically valid simulation components are subsequently stored in a model repository for reuse. As a proof of concept, we discuss a prototype implementation of the CODES framework using a queueing system as an example application domain.

  • Fast Computation of Hyper-exponential Approximations of the Response Time Distribution of MMPP/M/1 Queues

    Publication Year: 2008 , Page(s): 113 - 120
    Cited by:  Papers (1)

    Input characterization to describe the flow of incoming traffic in network systems, such as the grid and the WWW, is often performed using Markov-modulated Poisson processes (MMPPs). Therefore, for capacity planning and quality-of-service (QoS) oriented design, the server that receives the incoming traffic is often modeled as an MMPP/M/1 queue. In previous work we provided an approximate solution for the response time distribution of the MMPP/M/1 queue, based on a hyper-exponential process obtained via a weighted superposition of the response time distributions of M/M/1 queues. Compared to exact solution methods or simulative techniques, the aim of this approximation is to provide the potential for more efficient model solution, so as to enable, e.g., real-time what-if analysis in system reconfiguration scenarios. In this paper, we show how fast computation can be supported in practical settings by ad hoc techniques that allow the hyper-exponential model to be solved with no costly iterative or numerical steps, which would otherwise be required to compute the length of the transient phases due to state switches in the MMPP arrival process. An application to the performance analysis of a grid system is also shown, supporting the efficiency of our proposal.

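The weighted-superposition idea can be sketched directly: the M/M/1 response time is exponential with rate μ − λ, so each MMPP phase contributes one exponential term, weighted by the phase's steady-state probability. The two-phase parameters below are invented, and this is only the basic mixture, not the paper's fast solution technique:

```python
import math

# Hyper-exponential approximation of the MMPP/M/1 response time CDF
# (illustrative parameters; basic mixture only).

def phase_probs(r12, r21):
    """Steady-state probabilities of a 2-phase MMPP modulating chain."""
    return r21 / (r12 + r21), r12 / (r12 + r21)

def hyperexp_response_cdf(t, mu, lam, r12, r21):
    """Each phase i contributes the M/M/1 response time CDF
    1 - exp(-(mu - lam_i) * t), weighted by its steady-state probability
    (requires mu > lam_i for stability in every phase)."""
    p = phase_probs(r12, r21)
    return sum(pi * (1.0 - math.exp(-(mu - li) * t)) for pi, li in zip(p, lam))

mu = 10.0                  # service rate
lam = (4.0, 8.0)           # arrival rate in each MMPP phase
r12, r21 = 0.5, 1.5        # phase switching rates
for t in (0.1, 0.5, 1.0):
    print(t, round(hyperexp_response_cdf(t, mu, lam, r12, r21), 4))
```

Evaluating this mixture is a closed-form computation, which is what makes the hyper-exponential form attractive for real-time what-if analysis.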
  • Beyond the Model of Persistent TCP Flows: Open-Loop vs Closed-Loop Arrivals of Non-persistent Flows

    Publication Year: 2008 , Page(s): 121 - 130
    Cited by:  Papers (7)

    It is common for simulation and analytical studies to model Internet traffic as an aggregation of mostly persistent TCP flows. In practice, however, flows follow a heavy-tailed size distribution and their number fluctuates significantly with time. One important issue that has been largely ignored is whether such non-persistent flows arrive in the network in an open-loop (say, Poisson) or closed-loop (interactive) manner. This paper focuses on the differences that the TCP flow arrival process introduces in the generated aggregate traffic. We first review the Processor Sharing models for such flow arrival processes as well as the corresponding TCP packet-level models. Then, we focus on the queueing performance that results from each model, and show that the closed-loop model produces lower loss rates and queueing delays than the open-loop model. We explain this difference in terms of the increased traffic variability that the open-loop model produces; the cause of the latter is that the flow arrival rate in the open-loop model does not decrease upon congestion. We also study the transient effect of congestion events on the two models and show that the closed-loop model results in congestion-responsive traffic while the open-loop model does not. Finally, we discuss the implications of the differences between the two models for several networking problems.

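A toy sketch of the open-loop vs. closed-loop distinction (parameters invented; this is not the paper's Processor Sharing model): open-loop arrivals keep their rate regardless of flow duration, while a closed-loop user starts a new flow only after the previous one completes, so longer flows (congestion) mean fewer arrivals.

```python
import random

def open_loop_arrivals(rate, horizon, rng):
    """Poisson arrivals: number of flows started in [0, horizon)."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t >= horizon:
            return n
        n += 1

def closed_loop_arrivals(n_users, flow_time, think_time, horizon):
    """Each user cycles: one flow (flow_time), then thinking (think_time)."""
    cycle = flow_time + think_time
    return n_users * int(horizon / cycle)

rng = random.Random(42)
print(open_loop_arrivals(5.0, 1000.0, rng))          # ~5000 regardless of flow duration
print(closed_loop_arrivals(10, 1.0, 1.0, 1000.0))    # 5000 flows
print(closed_loop_arrivals(10, 3.0, 1.0, 1000.0))    # slower flows -> only 2500 flows
```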
  • An Analytical Model and Performance Evaluation of Transport Protocols for Wireless Ad Hoc Networks

    Publication Year: 2008 , Page(s): 131 - 138
    Cited by:  Papers (2)

    The performance of the Transmission Control Protocol (TCP), designed for wired networks, degrades significantly over wireless links, providing abysmal throughput, because TCP assumes that all packet losses are caused by congestion. However, in wireless ad hoc networks, packet losses can have several causes, such as link loss, node mobility, and faulty nodes, to name a few. Current techniques for improving TCP do not consider packet drops by faulty nodes. We propose a novel idea for distinguishing between wireless and congestion losses using the concept of Kleinrock's "power" metric. Based on this idea we propose two reliable transport protocols, TCP-Monet and TCP-Sec, that classify packet losses based on current connection status and react accordingly. TCP-Monet distinguishes between congestion and wireless losses, while TCP-Sec distinguishes among losses due to congestion, wireless errors, and packet drops by faulty nodes. We develop an analytic model of throughput in the presence of wireless losses, congestion losses, and packet drops by faulty nodes. We conducted simulation experiments using the ns-2 simulator, and our experiments demonstrate that this model is able to predict throughput more accurately when there are wireless losses and faulty nodes.

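One way to picture a power-based loss classifier (the rule and threshold below are invented for illustration; they are not the mechanisms of TCP-Monet or TCP-Sec):

```python
# Kleinrock's "power" is throughput divided by delay; a loss preceded by
# collapsing power is attributed to congestion, otherwise to the wireless
# channel. Threshold drop_frac is an invented parameter.

def power(throughput, rtt):
    return throughput / rtt

def classify_loss(samples, drop_frac=0.2):
    """samples: chronological (throughput, rtt) pairs observed before a loss."""
    powers = [power(th, rtt) for th, rtt in samples]
    if powers[-1] < (1 - drop_frac) * max(powers):
        return "congestion"      # power collapsed before the loss
    return "wireless"            # power stable: blame the channel

print(classify_loss([(10, 0.1), (9, 0.15), (6, 0.3)]))      # congestion
print(classify_loss([(10, 0.1), (10, 0.11), (9.8, 0.1)]))   # wireless
```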
  • Using Black-Box Modeling Techniques for Modern Disk Drives Service Time Simulation

    Publication Year: 2008 , Page(s): 139 - 145
    Cited by:  Papers (4)

    One of the most common techniques for evaluating the performance of a computer I/O subsystem has been founded on detailed simulation models that include specific features of storage devices, such as disk geometry, zone splitting, caching, read-ahead buffers, and request reordering. However, as soon as a new technological innovation is added, those models need to be reworked to include the new devices, making it difficult to keep general models up to date. An alternative is to model a storage device as a black-box probabilistic model, where the storage device itself, its interface, and the interconnection mechanisms are modeled as a single stochastic process, defining the service time as a random variable with an unknown distribution. This approach allows disk service times to be generated with less computational power by means of a variate generator included in a simulator, and thereby allows greater scalability in the simulation-based performance evaluation of I/O subsystems. In this paper, we present a method for building a variate generator from experimental service time data. To build the variate generator, both real and synthetic workloads may be used: the workload is used to feed the evaluated disk and obtain service time measurements, and from the experimental data we build a variate generator that fits the distribution of disk service times. We also present a use case of our method, in which we obtained a relative error ranging from 0.45% to 1%.

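A minimal sketch of a black-box variate generator. The fitting step here is a plain empirical-CDF inverse transform, which is an assumption for illustration rather than the paper's actual fitting method:

```python
import random

# Sort the measured service times and index them with a uniform variate:
# drawing this way reproduces the empirical distribution of the
# measurements. The data are synthetic, not from a real disk.

def make_variate_generator(measured, seed=0):
    xs = sorted(measured)
    rng = random.Random(seed)
    def draw():
        u = rng.random()                               # uniform in [0, 1)
        return xs[min(int(u * len(xs)), len(xs) - 1)]  # empirical quantile
    return draw

measured = [1.2, 3.4, 2.2, 8.0, 2.5, 2.9, 3.1, 1.9, 4.4, 2.7]  # ms, synthetic
draw = make_variate_generator(measured)
sample = [draw() for _ in range(10000)]
print(round(sum(sample) / len(sample), 2))    # close to the measured mean (3.23)
```

In a simulator, `draw()` replaces the detailed disk model: each simulated request simply consumes one generated service time.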
  • Automatic Mutation Testing and Simulation on OWL-S Specified Web Services

    Publication Year: 2008 , Page(s): 149 - 156
    Cited by:  Papers (3)

    The Web Ontology Language for Services (OWL-S) is a standard XML-based language for specifying workflows and integration semantics among Web services (WS), which form composite WS. This paper analyzes the fault patterns of OWL-S specified composite WS and their workflows, proposes an ontology-based mutation analysis method, and applies specification-based mutation techniques to composite WS simulation and testing. Four categories of OWL-S mutant operators are specified: data mutation, condition mutation, control flow mutation, and data flow mutation. Finally, the paper studies the ontology-based input mutation technique using a BookFinder service as a case study, which shows that ontology-based mutation provides viable test adequacy criteria for testing OWL-S specified composite WS.

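Condition mutation, one of the four operator categories listed above, can be illustrated in miniature. The tiny workflow representation and mutation table below are invented for the sketch, not the paper's OWL-S encoding:

```python
import copy

# One mutant per mutated condition operator: flip each comparison to a
# "boundary-violating" counterpart and yield a fresh copy of the workflow.

CONDITION_MUTATIONS = {"==": "!=", "<": ">=", ">": "<="}

def mutate_conditions(workflow):
    """Yield one mutant workflow per mutable condition operator."""
    for i, step in enumerate(workflow):
        if step.get("type") == "if" and step["op"] in CONDITION_MUTATIONS:
            mutant = copy.deepcopy(workflow)
            mutant[i]["op"] = CONDITION_MUTATIONS[step["op"]]
            yield mutant

wf = [
    {"type": "invoke", "service": "BookFinder"},
    {"type": "if", "left": "price", "op": "<", "right": "limit"},
    {"type": "invoke", "service": "Order"},
]
mutants = list(mutate_conditions(wf))
print(len(mutants))            # one condition -> one mutant
print(mutants[0][1]["op"])     # prints ">="
```

A test suite that cannot distinguish the original workflow from such mutants fails the corresponding adequacy criterion.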