Proceedings of the 38th Annual Simulation Symposium, 2005

Date: 4-6 April 2005

Displaying Results 1 - 25 of 45
  • 38th Annual Simulation Symposium - Cover

    Publication Year: 2005, Page(s): c1
    PDF (42 KB)
    Freely Available from IEEE
  • Proceedings. 38th Annual Simulation Symposium

    Publication Year: 2005
    PDF (57 KB)
    Freely Available from IEEE
  • 38th Annual Simulation Symposium - Copyright Page

    Publication Year: 2005, Page(s): iv
    PDF (47 KB)
    Freely Available from IEEE
  • 38th Annual Simulation Symposium - Table of contents

    Publication Year: 2005, Page(s): v - vii
    PDF (57 KB)
    Freely Available from IEEE
  • Message from the General Chair

    Publication Year: 2005, Page(s): viii
    PDF (37 KB) | HTML
    Freely Available from IEEE
  • Program Committee

    Publication Year: 2005, Page(s): ix
    PDF (31 KB)
    Freely Available from IEEE
  • External reviewers

    Publication Year: 2005, Page(s): x
    PDF (33 KB)
    Freely Available from IEEE
  • Call for Papers: 39th Annual Simulation Symposium

    Publication Year: 2005, Page(s): xi - xii
    PDF (42 KB)
    Freely Available from IEEE
  • Why we still don't know how to simulate networks

    Publication Year: 2005
    PDF (36 KB)

    Discrete event simulation has been used in the evaluation of computer communications networks for three or four decades. Over this period of time our simulation capability has improved significantly due to the efforts of the simulation research community. There has also been significant progress over the years by the networking research community in understanding the use of simulation in the design and evaluation of network architectures, protocols and services. I argue in this talk that, despite these advances, we still do not have an acceptable and widely-used methodology to simulate networks, which often leads to the questioning of the credibility of simulation results. This can be attributed to many reasons, including: 1) confusion and uncertainty within the networking community regarding the role that simulation plays in networking research; 2) fundamental limits that make it basically impossible to simulate Internet-scale networks; 3) the difficulty in building realistic network models; and 4) the lack of acceptable standards for validity and repeatability of simulation experiments. I will suggest a high-level research agenda that addresses these issues with the ultimate aim of enriching the networking research community's ability to use simulation as a meaningful tool for performance evaluation and prediction.

  • A context-aware data forwarding algorithm for sensor networks

    Publication Year: 2005, Page(s): 7 - 14
    Cited by: Papers (2)
    PDF (200 KB) | HTML

    Recent advances in wireless communication technologies have given rise to low-cost sensor networks. Sensor networks consist of low-cost, low-power nodes that are densely deployed in the environment to monitor a specific state of the environment, for example temperature, light, sound, speed or radiation. This paper presents a new data forwarding algorithm for sensor networks that takes into consideration the direction of the message, the positional relevance of a node to the message and the available power at that node. We conclude this paper by discussing an experimental study of the performance of the proposed data forwarding protocol for sensor networks.

  • Approximation techniques for the analysis of large traffic-groomed tandem optical networks

    Publication Year: 2005, Page(s): 15 - 22
    Cited by: Papers (1)
    PDF (256 KB) | HTML

    We consider a traffic-groomed optical network consisting of N nodes arranged in tandem. This optical network is modeled by a tandem queueing network of multirate loss queues with simultaneous resource possession. Two approximation methods are proposed for the analysis of this queueing network assuming that the total number of servers in each multirate loss queue is very large. The accuracy of these two approximation methods, as well as the accuracy of the familiar link independence algorithm, is verified by simulation. As an additional contribution, the simulation estimates have been obtained using adaptive rare-event simulation techniques based on importance sampling, because call blocking probabilities tend to be very small and it is not feasible to estimate them using standard simulation techniques.

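    The multirate loss queues in the abstract above generalize the classic Erlang loss model. As an illustrative aside (this is textbook background, not the paper's approximation methods), the blocking probability of the single-rate special case, an M/M/c/c loss queue, can be computed with the numerically stable Erlang-B recursion:

    ```python
    def erlang_b(servers: int, offered_load: float) -> float:
        """Blocking probability of an M/M/c/c loss queue via the
        Erlang-B recursion B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a)),
        starting from B(0, a) = 1.  Numerically stable for large c."""
        b = 1.0
        for c in range(1, servers + 1):
            b = offered_load * b / (c + offered_load * b)
        return b
    ```

    For very large server counts, such blocking probabilities become tiny, which is exactly why the paper resorts to importance sampling rather than naive simulation to estimate them.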
  • A methodology for the optimal configuration of TCP traffic in network simulations under link load constraints

    Publication Year: 2005, Page(s): 25 - 32
    PDF (328 KB) | HTML

    Given that over 90% of the Internet load is carried by TCP, most network simulation studies use TCP flows to generate the background traffic. A basic but unresolved question is: how can one decide how many TCP flows to simulate from one network node to another? Simulating too many flows on a link can cause an unrealistically high loss rate on that link, while simulating too few flows can result in undesirably light load conditions. Similarly, to simulate realistic network conditions, one has to carefully control the load distribution on various network links (e.g., edge vs. core links), as well as the hop count (path length) of the simulated TCP flows. Previous simulation studies have dealt with these issues in a trial-and-error manner, experimenting with several traffic configurations until a realistic distribution of link load and loss rate is achieved. In this paper, we present a methodology that determines the number of TCP flows that should be simulated between each pair of nodes in a network, based on the network topology, a specification of the utilization and loss rate for certain links, and an average number of hops for the TCP flows. Our methodology is based on a linear program formulation that, while meeting the utilization and loss rate specifications, minimizes the number of required TCP flows. This optimization criterion minimizes the memory requirement of the simulation. Our evaluations show that the proposed methodology can closely approximate the specified link conditions in terms of utilization and loss rate. We also analyze the largest approximation errors and reveal their causes.

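    The core idea above, choosing per-pair flow counts so that link loads land in target bands while the total flow count (and hence simulation memory) is minimized, can be illustrated with a toy brute-force search. Everything here is an illustrative assumption: the function name, the fixed per-flow load, and the two-link topology; the paper itself solves a continuous linear program with utilization and loss-rate constraints.

    ```python
    from itertools import product

    def min_flows(pairs, links, per_flow_load, bands, max_flows=20):
        """Toy stand-in for the paper's LP.  pairs: one path (tuple of
        link ids) per source-destination pair; bands: {link: (lo, hi)}
        target load range.  Returns the flow-count vector with the
        smallest total that keeps every link inside its band."""
        best = None
        for counts in product(range(max_flows + 1), repeat=len(pairs)):
            load = {l: 0.0 for l in links}
            for n, path in zip(counts, pairs):
                for l in path:
                    load[l] += n * per_flow_load  # each flow adds a fixed load
            if all(bands[l][0] <= load[l] <= bands[l][1] for l in links):
                if best is None or sum(counts) < sum(best):
                    best = counts
        return best
    ```

    The brute-force search is exponential in the number of node pairs, which is precisely why a linear program formulation is needed for realistic topologies.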
  • Performance analysis for multi-service networks with congestion-based pricing for QoS traffic

    Publication Year: 2005, Page(s): 33 - 40
    Cited by: Papers (1)
    PDF (1552 KB) | HTML

    There has been recent interest in employing unified pricing schemes for supporting both QoS and best-effort traffic within multiservice networks. This paper presents an analytical approach for the design and analysis of this class of network pricing control. In particular, we propose an approximation method to estimate overall user value arising from all traffic types, given the routes for QoS traffic. Our approach considers admission control as both a resource allocation issue for QoS traffic, and a pricing control decision for best-effort and QoS traffic. We show how to convert the price-based admission control decision to a resource-based blocking problem. We also present the numerical results of performance analysis for multiservice networks using our proposed approximation method. Our approach can be used to predict the performance and parameter settings for a wide range of combined QoS and best-effort pricing schemes.

  • Route recovery mechanisms for ad hoc networks equipped with switched single beam antennas

    Publication Year: 2005, Page(s): 41 - 48
    Cited by: Papers (2) | Patents (1)
    PDF (192 KB) | HTML

    In this paper we propose a novel three-phase route recovery mechanism for routing over switched single beam directional antennas. We adapt the popular dynamic source routing (DSR) protocol to the underlying directional layer and include an improved broadcast mechanism to facilitate efficient route discovery. In the event of a link failure, our optimized directional routing protocol (DRP) proceeds in three phases to recover the route to the destination: (i) antenna beam handoff, (ii) local route recovery, and (iii) zonal route recovery. We have compared the performance of DRP with the DSR protocol over both omnidirectional and directional antenna models under conditions of varying network mobility. Our results clearly indicate that DRP is robust to link failures even in highly mobile scenarios.

  • On the Internet delay-based clustering

    Publication Year: 2005, Page(s): 51 - 59
    Cited by: Patents (1)
    PDF (280 KB) | HTML

    The scalability of routing in large-scale networks, such as the Internet, is critical to achieve low delay and high throughput. The lack of scalability of routing protocols in large-scale networks results from the prohibitive overhead incurred during dissemination of path metric values across the network. This paper addresses this problem and proposes a cluster-based scheme, referred to as d-median, which strikes a balance between scalability and routing efficiency. In this scheme, the link metric information is exchanged on a per-cluster basis rather than on a per-node basis, thereby considerably reducing the routing overhead. The simulation results show that the scheme exhibits better performance compared to existing models.

  • Power conservation schemes for energy efficient data propagation in heterogeneous wireless sensor networks

    Publication Year: 2005, Page(s): 60 - 71
    Cited by: Papers (5)
    PDF (336 KB) | HTML

    We propose, implement and evaluate new energy conservation schemes for efficient data propagation in wireless sensor networks. Our protocols are adaptive, i.e., they locally monitor the network conditions and adjust accordingly towards optimal operation choices. This dynamic feature is particularly beneficial in heterogeneous settings and in cases of redeployment of sensor devices in the network area. We implement our protocols and evaluate their performance through a detailed simulation study using our extended version of ns-2. In particular, we combine our schemes with known communication paradigms. The simulation findings demonstrate significant gains and good trade-offs in terms of delivery success, delay and energy dissipation.

  • Perfect simulations for random trip mobility models

    Publication Year: 2005, Page(s): 72 - 79
    Cited by: Papers (20)
    PDF (608 KB) | HTML

    The random trip model was recently proposed as a generic mobility model that contains many particular mobility models, including the widely-known random waypoint and random walks, and accommodates more realistic scenarios. The probability distribution of the movement of a mobile in all these models typically varies with time and converges to a "steady state" distribution (viz., stationary distribution), whenever the latter exists. Protocol performance during this transient phase and in steady state may differ significantly. This justifies the interest in perfect sampling of the initial node mobility state, so that the simulation of the node mobility is perfect, i.e. it is in steady state throughout a simulation. In this work, we describe an implementation of perfect sampling for some random trip models. Our tool produces a perfect sample of the node mobility state, which is then used as input to the widely-used ns-2 network simulator. We further show some simulation results for a particular random trip mobility model, based on a real-world road map. The performance metrics that we consider include various node communication properties and their evolution with time. The results demonstrate the difference between transient and steady-state phases, and that the transient phase can be long lasting (on the order of a typical simulation duration) if the initial state is drawn from a non-steady-state distribution. The results give strong arguments in favor of running perfect simulations. Our perfect sampling tool is available to the public at: http://www.cs.rice.edu/~santa/research/mobility

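    For the random waypoint special case mentioned above, perfectly sampling the speed component has a well-known closed form: when trip speeds are drawn uniformly on [vmin, vmax] with vmin > 0, the time-stationary speed density is proportional to 1/v, so an initial speed can be drawn by inverse transform. The sketch below covers only the speed (the paper's tool also samples position and residual trip state); the function name is illustrative.

    ```python
    import random

    def steady_state_speed(vmin: float, vmax: float, rng=random) -> float:
        """Draw a node speed from the random waypoint time-stationary
        speed distribution (density proportional to 1/v on [vmin, vmax],
        vmin > 0), via inverse-transform sampling:
        v = vmin * (vmax / vmin) ** U for U uniform on [0, 1)."""
        u = rng.random()
        return vmin * (vmax / vmin) ** u
    ```

    Initializing speeds this way (rather than uniformly) is exactly what avoids the long transient that the abstract reports when the initial state is drawn from a non-steady-state distribution.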
  • A comparison of multicast feedback control mechanisms

    Publication Year: 2005, Page(s): 80 - 87
    Cited by: Papers (1) | Patents (1)
    PDF (192 KB) | HTML

    In reliable multicast applications, packet loss needs to be reported by having group members send feedback messages. This results in the well-known feedback implosion problem. The available feedback control mechanisms can be classified as timer-based, hierarchy-based and router-assisted, among which the timer-based approach is preferable due to its simplicity and flexibility. This paper compares the performance of a set of multicast protocols that use either traditional timer-based feedback control or a combination of traditional timers and representative-based feedback control. We investigate how the use of representatives and deterministic timers improves feedback control performance.

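    One common timer-based suppression scheme, in the style of Nonnenmacher and Biersack's exponential feedback timers, has each receiver delay its feedback by a value drawn from a truncated exponential over [0, T], biased toward T so that only the earliest few reports escape suppression. The sketch below is illustrative background, not one of the specific protocols compared in the paper; the parameterization via a group-size estimate Z is an assumption of this sketch.

    ```python
    import math
    import random

    def feedback_timer(T: float, Z: float, rng=random) -> float:
        """Draw a feedback delay in [0, T] from a truncated exponential
        biased toward T: t = T * ln(1 + U*(Z - 1)) / ln(Z), with U
        uniform on [0, 1) and Z > 1 an estimate of the group size.
        Most receivers wake late and are suppressed by earlier reports."""
        u = rng.random()
        return T * math.log(1 + u * (Z - 1)) / math.log(Z)
    ```

    With such timers, the expected number of duplicate feedback messages grows only slowly with group size, which is the property that makes the timer-based class attractive.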
  • Simulation verification and validation by dynamic policy enforcement

    Publication Year: 2005, Page(s): 91 - 98
    Cited by: Papers (3) | Patents (2)
    PDF (608 KB) | HTML

    This paper presents a new verification and validation (V&V) technique for simulation using dynamic policy enforcement. Constraints are formally specified as policies, and they are used to check whether the simulation satisfies these policies at runtime. This paper also proposes a development framework where policies are developed along with system development and V&V. Once policies are extracted from requirements and specified in a policy specification language, the rest of the development work is automatically performed by the tools in the framework. Both security requirements and functional requirements can be specified as policies and dynamically enforced during the simulation. An automated tool is available for policy specification and enforcement, and it is fully integrated with the simulation infrastructure. This paper also presents a sample system that is modeled and simulated, and policies are used to verify and validate the system model. The paper also discusses the overhead imposed by this kind of automated policy-based V&V compared to a hard-coded implementation of the same approach.

  • Approximation of discrete phase-type distributions

    Publication Year: 2005, Page(s): 99 - 106
    Cited by: Papers (5)
    PDF (688 KB) | HTML

    The analysis of discrete stochastic models such as generally distributed stochastic Petri nets can be done using state space-based methods. The behavior of the model is described by a Markov chain that can be solved mathematically. The phase-type distributions that are used to describe non-Markovian distributions have to be approximated. An approach for the fast and accurate approximation of discrete phase-type distributions is presented. This can be a step towards a practical state space-based simulation method, whereas formerly this approach often had to be discarded as infeasible due to high memory and runtime costs. Discrete phases also fit in well with current research on proxel-based simulation. They can represent infinite support distribution functions with considerably fewer Markov chain states than proxels. Our hope is that such a combination of both approaches will lead to a competitive simulation algorithm.

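    As background for the abstract above: a discrete phase-type distribution is the time to absorption of a Markov chain with initial vector a over transient states and substochastic transition matrix T, with PMF P(X = k) = a T^(k-1) t, where t[i] = 1 - sum_j T[i][j] is the absorption probability from state i. A small self-contained sketch (plain-Python vector-matrix products, for illustration only; the paper's contribution is approximating such distributions, not evaluating them):

    ```python
    def dph_pmf(alpha, T, kmax):
        """PMF of a discrete phase-type distribution for k = 1..kmax:
        P(X = k) = alpha * T**(k-1) * t, where t holds the per-state
        absorption (exit) probabilities."""
        n = len(alpha)
        t = [1.0 - sum(T[i]) for i in range(n)]
        pmf, vec = [], list(alpha)           # vec tracks alpha * T**(k-1)
        for _ in range(kmax):
            pmf.append(sum(vec[i] * t[i] for i in range(n)))
            vec = [sum(vec[i] * T[i][j] for i in range(n)) for j in range(n)]
        return pmf
    ```

    A single phase with self-loop probability p recovers the geometric distribution; chains of phases give the discrete Erlang-like shapes that approximation methods target.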
  • Could enough samples be more important than better designs for computer experiments?

    Publication Year: 2005, Page(s): 107 - 115
    Cited by: Papers (1)
    PDF (208 KB) | HTML

    A study was conducted to compare fifteen approaches to improve Latin hypercube designs for computer experiments, based on simulation tests and statistical analyses (ANOVA). Kriging models were employed to approximate twenty test functions. Validation at 5000 or 10,000 points was conducted to find prediction errors. The results show that there are statistically significant differences between the approximate results of employing different designs, but more often the difference is not significant. In most cases, the number of runs or the sample size has a stronger impact on the accuracy than do different designs. When the dimension is low, a small size increment can often reduce more error than do "better designs". To get the desired precision by a one-stage method, enough samples may be needed regardless of what design is used. Sample size determination may need much more attention for computer experiments.

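    For readers unfamiliar with the designs being compared above, a basic (unoptimized) Latin hypercube sample places n points in [0, 1]^d so that each dimension's n equal-width bins each receive exactly one point; the fifteen approaches in the paper improve on this baseline. A minimal illustrative sampler:

    ```python
    import random

    def latin_hypercube(n, d, rng=random):
        """Basic n-point Latin hypercube sample in [0, 1]**d: per
        dimension, a random permutation assigns each point to a distinct
        bin, then the point is jittered uniformly inside its bin."""
        cols = []
        for _ in range(d):
            perm = list(range(n))
            rng.shuffle(perm)
            cols.append(perm)
        return [[(cols[j][i] + rng.random()) / n for j in range(d)]
                for i in range(n)]
    ```

    "Better designs" further optimize the pairing of bins across dimensions (e.g., to maximize minimum distance); the study's point is that simply increasing n often helps accuracy more than such optimization.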
  • Improving scalability of network emulation through parallelism and abstraction

    Publication Year: 2005, Page(s): 119 - 129
    Cited by: Papers (3)
    PDF (216 KB) | HTML

    One approach to network emulation involves simulating a virtual network with a real-time network simulator and providing an I/O interface that enables interaction between real hosts and the virtual network. This allows real protocols and applications to be tested in a controlled and repeatable environment. To reflect conditions of large networks such as the Internet it is important that the emulation environment be scalable. This paper examines improvements in scalability of the virtual network achieved through the use of parallel discrete event simulation and simulation abstraction. Using just parallel simulation techniques, real-time emulation performance of nearly 50 million packet transmissions per second is achieved on 128 processors for a network model consisting of about 20,000 nodes. Using both parallel simulation and abstraction techniques, real-time emulation performance of nearly 500 million packet transmissions per second is achieved on 128 processors for a network model consisting of about 200,000 nodes.

  • GIPSE: streamlining the management of simulation on the grid

    Publication Year: 2005, Page(s): 130 - 137
    Cited by: Papers (4)
    PDF (168 KB) | HTML

    Although the grid allows the researcher to tap a vast amount of resources, the complexity involved in utilizing this power can make it unwieldy and time-consuming. The Grid Interface for Parameter Sweeps and Exploration (GIPSE) toolset aims to solve this issue by freeing users from script debugging, storage issues, and other minutiae involved in managing simulations on the grid. GIPSE, which interacts seamlessly with existing grid software, abstracts interactions with the grid to present a research-centric view of the process rather than the typical task-centric view. GIPSE offers an alternative interface to the grid that removes the need for application-specific wrappers around parameter-driven simulations and provides an interface to build data sets for visualization. In this paper, the authors discuss how GIPSE bridges a critical gap between existing tools and management of the overarching data result.

  • Effective co-verification of IEEE 802.11a MAC/PHY combining emulation and simulation technology

    Publication Year: 2005, Page(s): 138 - 146
    Cited by: Papers (1)
    PDF (360 KB) | HTML

    This work presents a system architecture and effective co-verification methodologies for the IEEE 802.11a medium access control (MAC) layer/physical (PHY) layer implementation. The architecture modeling includes hardware/software partitioning of a total system based on timing measurements from the C/C++ and Verilog design, and analysis of real-time requirements specified in the standard. The system is built on an evaluation platform that contains a Xilinx Virtex-II FPGA and an Altera Excalibur ARM922. The authors present an approach that combines emulation and simulation for efficient debugging of the IEEE 802.11a wireless LAN using various verification technologies.

  • User centric walk: an integrated approach for modeling the browsing behavior of users on the Web

    Publication Year: 2005, Page(s): 149 - 159
    Cited by: Papers (13)
    PDF (336 KB) | HTML

    The performance evaluation of Web applications usually requires the analysis of sequences of user requests for specific Web pages. These sequences can be obtained, for example, by applying empirical methods (recording the real sequence of requests), or by applying a formal model for generating synthetic results. In this paper, we present our Web browsing model and its implementation as part of our novel user centric walk algorithm. By taking into account the hyperlink structure as well as the different user behavior on the Web, user centric walk allows us to generate accurate synthetic data that can be used instead of empirically obtained requests. Additionally, in this paper we show using empirical data that the probability of choosing some hyperlink from a given page, as well as the probability of a user leaving a page without following a hyperlink, is best characterized by a power law. Finally, we show the flexibility and applicability of our model by performing the required correlations with empirical data in order to validate our approach.

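    The power-law link-choice behavior reported above can be mimicked in a synthetic-workload generator by weighting the hyperlink ranked r on a page proportionally to r**(-alpha). The sketch below is an illustrative assumption, not the paper's algorithm: the function name and the exponent parameter are hypothetical, whereas the paper fits the exponent empirically.

    ```python
    import random

    def powerlaw_link_choice(num_links, alpha=1.0, rng=random):
        """Pick an outgoing hyperlink (1-based rank) where the link
        ranked r is chosen with probability proportional to r**(-alpha),
        via inverse-transform sampling over the cumulative weights."""
        weights = [r ** -alpha for r in range(1, num_links + 1)]
        u = rng.random() * sum(weights)
        acc = 0.0
        for r, w in enumerate(weights, start=1):
            acc += w
            if u <= acc:
                return r
        return num_links  # guard against floating-point round-off
    ```

    A separate power-law-distributed probability of leaving the page without clicking (as the paper also observes) could be layered on top of this per-link choice.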