
11th IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer Telecommunications Systems (MASCOTS 2003)

Date: 12-15 Oct. 2003


Displaying Results 1-25 of 46
  • Analysis of design alternatives for reverse proxy cache providers

    Publication Year: 2003, Page(s): 316-323

    Reverse proxy caches are used to provide scalability and improved latency to popular sites on the Web. In this paper we provide analytical performance models for distributed reverse proxy cache architectures, and study the trade-offs between various design alternatives. Specifically, we consider static and dynamic assignment of proxy cache nodes to Web sites, with different levels of sharing of proxy caches among Web sites. Innovative modeling contributions have been introduced to handle real design constraints, such as bounded cache size and bounded processing power, and different characteristics related to the hosted objects, including reference rates, popularity distributions and update rates. In the analysis we have modeled both the system steady state and the transient interaction between proxy sites and Web sites. We have found different trade-offs between various design alternatives depending on characteristics of the Web site workloads.

  • A packet-level simulation study of optimal Web proxy cache placement

    Publication Year: 2003, Page(s): 324-333
    Cited by:  Papers (1)

    The Web proxy cache placement problem is a classical optimization problem: place N proxies within an internetwork so as to minimize the average user response time for retrieving Web objects. In this paper, we tackle this problem using packet-level ns2 network simulations. There are three main conclusions from our study. First, network-level effects (e.g., TCP dynamics, network congestion) can have a significant impact on user-level Web performance, and must not be overlooked when optimizing Web proxy cache placement. Second, cache filter effects can have a pronounced impact on the overall optimal caching solution. Third, small perturbations to the Web workload can produce quite different solutions for optimal proxy cache placement. This implies that robust, approximate solutions are more important than "perfect" optimal solutions. The paper provides several general heuristics for cache placement based on our packet-level simulations.
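
    The underlying placement objective can be illustrated with a simple greedy heuristic sketch (illustrative only: the distance-based objective, toy topology and request rates below are assumptions; the paper itself evaluates placements with packet-level ns2 simulation rather than a distance model).

```python
# Greedy facility-location sketch for Web proxy cache placement.
# Illustrative only: distances and request rates below are made up.

def avg_distance(clients, proxies, dist):
    """Request-weighted mean distance from each client to its nearest proxy."""
    total_rate = sum(rate for _, rate in clients)
    total = sum(rate * min(dist[c][p] for p in proxies) for c, rate in clients)
    return total / total_rate

def greedy_placement(clients, candidates, dist, n_proxies):
    chosen = []
    for _ in range(n_proxies):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: avg_distance(clients, chosen + [c], dist))
        chosen.append(best)
    return chosen

# Toy 4-node topology: dist[a][b] is a hop count, clients carry request rates.
dist = {
    "A": {"A": 0, "B": 1, "C": 2, "D": 3},
    "B": {"A": 1, "B": 0, "C": 1, "D": 2},
    "C": {"A": 2, "B": 1, "C": 0, "D": 1},
    "D": {"A": 3, "B": 2, "C": 1, "D": 0},
}
clients = [("A", 10.0), ("B", 5.0), ("C", 2.0), ("D", 8.0)]

placement = greedy_placement(clients, list(dist), dist, n_proxies=2)
print(placement, avg_distance(clients, placement, dist))
```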

  • Tracking the evolution of Web traffic: 1995-2003

    Publication Year: 2003, Page(s): 16-25
    Cited by:  Papers (8)

    Understanding the nature and structure of Web traffic is essential for valid simulations of networking technologies that affect the end-to-end performance of HTTP connections. We provide data suitable for the construction of synthetic Web traffic generators and in doing so retrospectively examine the evolution of Web traffic. We use a simple and efficient analysis methodology based on the examination of only the TCP/IP headers of one-half (server-to-client) of the HTTP connection. We show the impact of HTTP protocol improvements such as persistent connections as well as modern content structure that reflects the influences of "banner ads," server load balancing, and content distribution networks. Lastly, we comment on methodological issues related to the acquisition of HTTP data suitable for performing these analyses, including the effects of trace duration and trace boundaries.

  • Zone-based shortest positioning time first scheduling for MEMS-based storage devices

    Publication Year: 2003, Page(s): 104-113
    Cited by:  Papers (5)

    Access latency to secondary storage devices is frequently a limiting factor in computer system performance. New storage technologies promise to provide greater storage densities at lower latencies than is currently obtainable with hard disk drives. MEMS-based storage devices use orthogonal magnetic or physical recording techniques and thousands of simultaneously active MEMS-based read-write tips to provide high-density low-latency nonvolatile storage. These devices promise seek times 10-20 times faster than hard drives, storage densities 10 times greater, and power consumption an order of magnitude lower. Previous research has examined data layout and request ordering algorithms that are analogs of those developed for hard drives. We present an analytical model of MEMS device performance that motivates a computationally simple MEMS-based request scheduling algorithm called ZSPTF, which has average response times comparable to shortest positioning time first (SPTF) but with response time variability comparable to circular scan (C-SCAN).
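
    A rough sketch of the zone-based SPTF idea follows (a simplification, not the authors' exact algorithm; the positioning-time cost model below is a placeholder for the paper's analytical MEMS model).

```python
# Sketch of a zone-based SPTF scheduler: requests are bucketed by zone, zones
# are visited in a cyclic C-SCAN-like order, and within the active zone the
# request with the smallest estimated positioning time is served next.

def positioning_time(head, target):
    # Placeholder cost model; the paper derives this from MEMS device geometry.
    return abs(head[0] - target[0]) + 0.2 * abs(head[1] - target[1])

def zsptf_order(requests, n_zones, zone_of, start=(0, 0)):
    zones = {z: [] for z in range(n_zones)}
    for req in requests:
        zones[zone_of(req)].append(req)
    head, order, zone = start, [], 0
    while any(zones.values()):
        if not zones[zone]:                 # empty zone: advance C-SCAN style
            zone = (zone + 1) % n_zones
            continue
        nxt = min(zones[zone], key=lambda r: positioning_time(head, r))
        zones[zone].remove(nxt)
        order.append(nxt)
        head = nxt
    return order

# Toy requests as (x, y) media coordinates; zone = x coordinate // 10.
reqs = [(3, 5), (12, 1), (27, 9), (4, 8), (15, 2)]
print(zsptf_order(reqs, n_zones=3, zone_of=lambda r: r[0] // 10))
```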

  • Mapping peer behavior to packet-level details: a framework for packet-level simulation of peer-to-peer systems

    Publication Year: 2003, Page(s): 71-78
    Cited by:  Papers (17)

    The growing interest in peer-to-peer systems (such as Gnutella) has inspired numerous research activities in this area. Although many demonstrations have been performed that show that the performance of a peer-to-peer system is highly dependent on the underlying network characteristics, much of the evaluation of peer-to-peer proposals has used simplified models that fail to include a detailed model of the underlying network. This can be largely attributed to the complexity in experimenting with a scalable peer-to-peer system simulator built on top of a scalable network simulator with packet-level details. In this work we design and develop a framework for an extensible and scalable peer-to-peer simulation environment that can be built on top of existing packet-level network simulators. The simulation environment is portable to different network simulators, which enables us to simulate a realistic large scale peer-to-peer system using existing parallelization techniques. We demonstrate the use of the simulator for some simple experiments that show how Gnutella system performance can be impacted by the network characteristics.

  • An active traffic splitter architecture for intrusion detection

    Publication Year: 2003, Page(s): 238-241
    Cited by:  Papers (4)

    Scaling network intrusion detection to high network speeds can be achieved using multiple sensors operating in parallel coupled with a suitable load balancing traffic splitter. This paper examines a splitter architecture that incorporates two methods for improving system performance: the first is the use of early filtering, where a portion of the packets is processed on the splitter instead of the sensors. The second is the use of locality buffering, where the splitter reorders packets in a way that improves memory access locality on the sensors. Our experiments suggest that early filtering reduces the number of packets to be processed by 32%, giving an 8% increase in sensor performance, while locality buffers improve sensor performance by about 10%. Combined, the two methods result in an overall improvement of 20%, while the performance of the slowest sensor is improved by 14%.
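
    A minimal sketch of the two splitter techniques (illustrative; the filter rule, flow-hash assignment and buffer threshold below are invented, not taken from the paper).

```python
# Sketch of an active traffic splitter: early filtering handles some packets
# on the splitter itself, and locality buffering batches packets by destination
# port before forwarding them to the sensor chosen by a flow hash.
from collections import defaultdict

FILTERED_PORTS = {123}          # hypothetical traffic handled on the splitter

def split(packets, n_sensors, flush_threshold=4):
    """Assign packets to sensors by flow hash, with locality buffering."""
    buffers = defaultdict(list)          # (sensor, dst_port) -> packets
    out = defaultdict(list)              # sensor -> packet batches
    for pkt in packets:
        if pkt["dst_port"] in FILTERED_PORTS:
            continue                     # early filtering: never reaches a sensor
        sensor = hash((pkt["src"], pkt["dst"])) % n_sensors
        key = (sensor, pkt["dst_port"])  # locality buffering key
        buffers[key].append(pkt)
        if len(buffers[key]) >= flush_threshold:
            out[sensor].append(buffers.pop(key))
    for (sensor, _), buf in buffers.items():   # flush remaining buffers
        out[sensor].append(buf)
    return out

pkts = [{"src": "10.0.0.1", "dst": "10.0.1.%d" % (i % 3), "dst_port": p}
        for i, p in enumerate([80, 80, 123, 22, 80, 22, 80, 123])]
print({s: [len(b) for b in batches] for s, batches in split(pkts, 2).items()})
```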

  • TwoTowers 3.0: enhancing usability

    Publication Year: 2003, Page(s): 188-193
    Cited by:  Papers (3)

    TwoTowers is a software tool for the functional verification and performance evaluation of computer, communication and software systems represented through stochastic process algebra (SPA). In this paper we describe a novel version (3.0), in which the modeling language is no longer a pure SPA, but a SPA-based architectural description language called AEmilia. We show that TwoTowers 3.0 improves on the previous version in terms of usability, because AEmilia hides most of the technicalities of SPA, and also in terms of efficiency, because a new algorithm for state space generation has been implemented.

  • Toward scaling network emulation using topology partitioning

    Publication Year: 2003, Page(s): 242-245
    Cited by:  Papers (11)

    Scalability is the primary challenge to studying large complex network systems with network emulation. This paper studies topology partitioning, assigning disjoint pieces of the network topology across processors, as a technique to increase emulation capacity with increasing hardware resources. We develop methods to create partitions based on expected communication across the topology. Our evaluation methodology quantifies the communication overhead or efficiency of the resulting partitions. We implement and contrast three partitioning strategies in ModelNet, a large-scale network emulator, using different topologies and uniform communication patterns. Results show that standard graph partitioning algorithms can double the efficiency of the emulation for Internet-like topologies relative to random partitioning.
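
    The evaluation idea, scoring a partition by how much expected traffic crosses partition boundaries, can be sketched as follows (a toy edge-cut metric; ModelNet's pipeline and the paper's efficiency metric are more involved).

```python
# Score a topology partition by the fraction of expected traffic that crosses
# partition boundaries (illustrative; edge weights below are made up).

def cut_fraction(edges, assignment):
    """Fraction of expected traffic carried on cross-partition links."""
    total = sum(w for _, _, w in edges)
    cut = sum(w for u, v, w in edges if assignment[u] != assignment[v])
    return cut / total

# Toy topology: (node_u, node_v, expected_traffic).
edges = [("a", "b", 10), ("b", "c", 1), ("c", "d", 10), ("a", "d", 1)]

random_like = {"a": 0, "b": 1, "c": 0, "d": 1}     # splits both heavy links
traffic_aware = {"a": 0, "b": 0, "c": 1, "d": 1}   # keeps heavy links internal
print(cut_fraction(edges, random_like), cut_fraction(edges, traffic_aware))
```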

  • Performance benchmarking of dynamic Web technologies

    Publication Year: 2003, Page(s): 250-253
    Cited by:  Papers (1)

    Many Web sites today dynamically generate responses when user requests are received. Dynamic content creation enables features that might otherwise not be possible. One disadvantage of dynamically generating Web content is the impact on Web server performance. In this paper, we experimentally evaluate the impact of three different dynamic content technologies on Web server performance. The results show that the overheads of dynamic content generation reduce the peak response rate of a Web server by a factor of 2 to 8, depending on the workload characteristics and the specific Web technologies used.

  • P-sim: a simulator for peer-to-peer networks

    Publication Year: 2003, Page(s): 213-218
    Cited by:  Papers (1)  |  Patents (2)

    In the past few years, there has been intense interest in designing and studying peer-to-peer networks. Many initial measurement studies on currently deployed peer-to-peer networks attempted to understand their performance. However, the large size and complex nature of these networks make it difficult to analyze their properties. Simulation of these peer-to-peer networks enables a methodical evaluation and analysis of their performance. However, to our knowledge, there is no tool for simulating different peer-to-peer network protocols for a comparative study. We present p-sim: a tool that can simulate peer-to-peer networks on top of representative Internet topologies. P-sim has several capabilities to provide an accurate model of real-world peer-to-peer networks; it can scale to thousands of nodes and is extensible to simulate new peer-to-peer network protocols. In this paper, we discuss the capabilities of p-sim, its user interface and two case studies.

  • NAM: a network adaptable middleware to enhance response time of Web services

    Publication Year: 2003, Page(s): 136-145
    Cited by:  Papers (1)

    Web services are an emerging software technology that employs XML to share and exchange data. They may serve as wrappers for legacy data sources, integrate multiple remote data sources, filter information by processing queries (function shipping), etc. For those that interact with an end user, a fast response time can be the difference between a frustrated and a satisfied user. A Web service may employ a lossless compression technique, e.g., Zip, XMill, etc., to reduce the size of an XML message and thereby shorten its transmission time. This saving might be outweighed by the overhead of compressing the output of a Web service at a server and decompressing it at a client. The primary contribution of this paper is NAM, a middleware that strikes a compromise between these two factors in order to enhance response time. NAM decides when to compress data based on the available client and server processor speeds, and network characteristics. Compared with today's common practice of always transmitting the output of a Web service uncompressed, our experimental results show that NAM provides similar or significantly improved response times (at times more than 90% improvement) over Internet connections with bandwidths ranging from 80 to 100 Mbps.
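
    The core decision NAM makes can be sketched as a simple cost comparison (the cost model, rates and numbers below are illustrative assumptions, not NAM's actual estimator).

```python
# Compress only when the predicted end-to-end time with compression beats the
# time to send the message raw. All rates and sizes below are made up.

def should_compress(size_bytes, bandwidth_bps, compress_ratio,
                    server_comp_rate, client_decomp_rate):
    """True if sending compressed is predicted to be faster end to end.

    compress_ratio: compressed size / original size
    *_rate: bytes per second the server/client can (de)compress
    """
    plain = size_bytes * 8 / bandwidth_bps
    compressed = (size_bytes / server_comp_rate
                  + size_bytes * compress_ratio * 8 / bandwidth_bps
                  + size_bytes * compress_ratio / client_decomp_rate)
    return compressed < plain

# 1 MB XML response: a fast LAN favours sending it raw, a slow link favours zip.
for bw in (100e6, 1e6):
    print(int(bw), should_compress(1_000_000, bw, compress_ratio=0.2,
                                   server_comp_rate=5e6, client_decomp_rate=60e6))
```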

  • Considering the energy consumption of mobile storage alternatives

    Publication Year: 2003, Page(s): 36-45
    Cited by:  Papers (2)

    This paper is motivated by a simple question: what are the energy consumption characteristics of mobile storage alternatives? To answer this question, we are faced with a design space of multiple dimensions. Two important dimensions are the type of storage technologies and the type of file systems. In this paper, we explore some options along each of these two dimensions. We have constructed a logical-disk system, which can be configured to run on different storage technologies and to emulate the behavior of different file systems. As we explore these configuration options, we find that the energy behavior is determined by a complex interaction of three factors: the power management mechanism of the storage device, the distribution of idleness in the workload, and the file system strategies that attempt to exploit and bridge these first two factors.

  • Creating realistic BGP models

    Publication Year: 2003, Page(s): 64-70
    Cited by:  Papers (7)

    Modeling the Internet infrastructure is a challenging endeavor. Complex interactions between protocols, increasing traffic volumes and the irregular structure of the Internet lead to demanding requirements for the simulation developer. These requirements include implementation detail, memory efficiency and scalability, among others. We introduce a simulation model of the Border Gateway Protocol that we call BGP++, which is built on the popular ns-2 simulation environment. A novel development approach is presented that incorporates the public domain routing software GNU Zebra in the simulator. Most of the original software functionality is retained, while the transition to the simulation environment required a manageable amount of effort. Moreover, the discussed design inherits much of the maturity of the original software, since the latter is only minimally modified. We analyze BGP++ features and highlight its potential to provide significant aid in BGP research and modeling.

  • DEVS today: recent advances in discrete event-based information technology

    Publication Year: 2003, Page(s): 148-161
    Cited by:  Papers (9)

    The DEVS modeling and simulation framework and its fundamental concepts are discussed from the standpoint of discrete event information processing, and an example drawn from recent experiments on infant cognition is reviewed. We also cover the DEVS formalism's atomic and coupled models and its hierarchical, modular composition approach. Some industrial applications of the methodology are discussed in depth to highlight the formalism's utility in the development of commercial and defense information technologies.

  • Minimizing packet loss by optimizing OSPF weights using online simulation

    Publication Year: 2003, Page(s): 79-86
    Cited by:  Papers (5)

    In this paper, we present a scheme for minimizing packet loss in OSPF networks by optimizing link weights using online simulation. We have chosen the packet loss rate in the network as the optimization metric because it is a good indicator of congestion and impacts the performance of the underlying applications. We have formulated the packet loss rate in the network in terms of the link parameters, such as bandwidth and buffer space, and the parameters of the traffic demands. A GI/M/1/K queuing model has been used to compute the packet drop probability on a given link. The problem of optimizing OSPF weights is known to be NP-hard even for the case of a linear objective function (Fortz and Thorup, 2000). We use the online simulation (OLS) framework of Ye et al. (2001) to search for a good link weight setting and as a tool for automatic network management. OLS uses a fast, scalable recursive random search (RRS) algorithm to search the parameter space. Our results demonstrate that RRS takes 50-90% fewer function evaluations than the local search heuristic of Fortz and Thorup (2000) to find a "good" link weight setting. The amount of improvement depends on the network topology, traffic conditions and optimization metric. We have simulated the proposed OSPF optimization scheme using ns, and our results demonstrate improvements of the order of 30-60% in the total packet drop rate for the traffic and topologies considered.
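
    The per-link loss term can be illustrated with the simpler M/M/1/K queue; the paper itself uses a GI/M/1/K model and feeds the resulting loss rates into a recursive random search over the link weights. A minimal sketch of the blocking-probability piece:

```python
# Blocking probability of an M/M/1/K queue: the chance an arriving packet
# finds the K-packet buffer full. This is a simplification of the paper's
# GI/M/1/K per-link model, shown only to illustrate the formulation.

def mm1k_drop_prob(arrival_rate, service_rate, K):
    rho = arrival_rate / service_rate
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

# Drop probability on a link with room for K=20 packets, at rising load.
for load in (0.5, 0.8, 0.95, 1.1):
    print(load, round(mm1k_drop_prob(load, 1.0, 20), 6))
```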

  • Quantifying the effects of recent protocol improvements to standards-track TCP

    Publication Year: 2003, Page(s): 226-229
    Cited by:  Papers (1)

    We assess the state-of-the-art in Internet congestion control and error recovery through a controlled study that considers the integration of standards-track TCP error recovery and both TCP and router-based congestion control. The goal is to examine and quantify the benefits of deploying standards-track technologies for Internet traffic as a function of the level of offered network load. We limit our study to the dominant and most stressful class of Internet traffic: bursty HTTP flows. Contrary to expectations and published prior work, we find that for HTTP flows (1) there is no clear benefit in using TCP SACK over TCP Reno, (2) unless congestion is a serious concern (i.e., unless average link utilization is above approximately 80%), there is little benefit to using Adaptive RED queue management, and (3) above 80% link utilization there is potential benefit to using Adaptive RED with ECN marking; however, complex performance trade-offs exist and results are sensitive to parameter settings.

  • Using user interface event information in dynamic voltage scaling algorithms

    Publication Year: 2003, Page(s): 46-55
    Cited by:  Papers (8)  |  Patents (2)

    Increasingly, mobile computers use dynamic voltage scaling (DVS) to reduce CPU voltage and speed and thereby increase battery life. To determine how to change voltage and speed when responding to user interface events, we analyze traces of real user workloads. We evaluate a new heuristic for inferring when user interface tasks complete and find it is more efficient and nearly as effective as other approaches. We compare DVS algorithms and find that for a given performance level, the PACE algorithm uses the least energy and the Stepped algorithm uses the second least. We find that different types of user interface events (mouse movements, mouse clicks, and keystrokes) trigger tasks with significantly different CPU use, suggesting one should use different speeds for different event types. We also find differences in CPU use between categories of the same event type, e.g., between pressing spacebar and pressing enter, and between events of different applications. Thus, it is better to predict task CPU use based solely on tasks of the same category and application. However, energy savings from such improved predictions are small.
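
    The per-category prediction idea can be sketched as follows (a toy mean-based predictor with an assumed perception deadline and speed set; this is not the PACE or Stepped algorithm from the paper).

```python
# Pick the lowest CPU speed predicted to finish a UI-triggered task within a
# perception deadline, using per-(event type, application) history.
# Speeds, deadline and sample values below are illustrative assumptions.
from collections import defaultdict

SPEEDS = [0.3, 0.5, 0.75, 1.0]        # fractions of full clock speed
DEADLINE = 0.05                        # seconds before the user notices lag

history = defaultdict(list)            # (event_type, app) -> CPU seconds
                                       # measured at full speed

def record(event_type, app, cpu_seconds_full_speed):
    history[(event_type, app)].append(cpu_seconds_full_speed)

def choose_speed(event_type, app):
    samples = history.get((event_type, app))
    if not samples:
        return 1.0                     # no history: be conservative
    predicted = sum(samples) / len(samples)
    for s in SPEEDS:                   # lowest speed that still meets deadline
        if predicted / s <= DEADLINE:
            return s
    return 1.0

record("keystroke", "editor", 0.004)
record("keystroke", "editor", 0.006)
record("mouse_click", "browser", 0.030)
print(choose_speed("keystroke", "editor"))     # light task -> low speed
print(choose_speed("mouse_click", "browser"))  # heavier task -> higher speed
```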

  • Disk built-in caches: evaluation on system performance

    Publication Year: 2003, Page(s): 306-313
    Cited by:  Papers (1)

    Disk drive manufacturers are putting increasingly larger built-in caches into disk drives. Today, 2 MB buffers are common on low-end retail IDE/ATA drives, and some SCSI drives are now available with 16 MB. However, few published data are available to demonstrate that such large built-in caches can noticeably improve overall system performance. In this paper, we investigated the impact of the disk built-in cache on file system response time when the file system buffer cache becomes larger. Via detailed file system and disk system simulation, we arrive at three main conclusions: (1) With a reasonably-sized file system buffer cache (16 MB or more), there is very little performance benefit of using a built-in cache larger than 512 KB. (2) As a readahead buffer, the disk built-in cache provides noticeable performance improvements for workloads with read sequentiality, but has little positive effect on performance if there are more concurrent sequential workloads than cache segments. (3) As a writing cache, it also has some positive effects on some workloads, at the cost of reducing reliability. The disk drive industry is very cost-sensitive. Our research indicates that the current trend of using large built-in caches is unnecessary and a waste of money and power for most users. Disk manufacturers could use much smaller built-in caches to reduce the cost as well as power-consumption, without affecting performance.

  • Scheduling parallel jobs with CPU and I/O resource requirements in cluster computing systems

    Publication Year: 2003, Page(s): 336-343
    Cited by:  Papers (2)

    This paper addresses the problem of on-line coordinated allocation of processor and I/O resources in large-scale shared heterogeneous cluster computing systems. Most research on job scheduling has focused solely on the allocation of processors to jobs. However, since I/O is also a critical resource for many jobs, the allocation of processor and I/O resources must be coordinated to allow the system to operate most effectively. To this end, we present an efficient job scheduling policy and study its performance under various system and workload parameters. We also compare the performance of the proposed policy with a static space-time sharing policy. The results show that the proposed policy performs substantially better than the static space-time sharing policy.

  • Synthesizing representative I/O workloads using iterative distillation

    Publication Year: 2003, Page(s): 6-15
    Cited by:  Papers (6)

    Storage systems designers are still searching for better methods of obtaining representative I/O workloads to drive studies of I/O systems. Traces of production workloads are very accurate, but inflexible and difficult to obtain. The use of synthetic workloads addresses these limitations; however, synthetic workloads are accurate only if they share certain key properties with the production workload on which they are based (e.g., mean request size, read percentage). Unfortunately, we do not know which properties are "key" for a given workload and storage system. We have developed a tool, the Distiller, that automatically identifies the key properties ("attribute-values") of the workload. The Distiller then uses these attribute-values to generate a synthetic workload representative of the production workload. This paper presents the design and evaluation of the Distiller. We demonstrate how the Distiller finds representative synthetic workloads for simple artificial workloads and three production workload traces.
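
    The iterative-distillation loop can be sketched as follows (structure only: the toy workload, attribute library and evaluation metric below are stand-ins for the Distiller's real machinery).

```python
# Grow the set of workload attributes until a synthetic workload generated
# from them behaves "close enough" to the target trace under a toy metric.
import random
import statistics

random.seed(0)

def generate(trace, attrs):
    """Toy synthetic-workload generator: honour only the chosen attributes."""
    n = len(trace)
    mean = statistics.mean(trace) if "mean" in attrs else 1.0
    std = statistics.pstdev(trace) if "variance" in attrs else 0.0
    return [max(0.0, random.gauss(mean, std)) for _ in range(n)]

def evaluate(trace):
    """Toy 'storage system' response metric: 95th percentile request size."""
    return sorted(trace)[int(0.95 * (len(trace) - 1))]

def distill(target, candidates, tolerance=0.05):
    chosen, target_perf = [], evaluate(target)
    while True:
        error = abs(evaluate(generate(target, chosen)) - target_perf) / target_perf
        if error <= tolerance or not candidates:
            return chosen, error
        # Add the candidate attribute that most reduces the error.
        best = min(candidates, key=lambda a: abs(
            evaluate(generate(target, chosen + [a])) - target_perf))
        candidates.remove(best)
        chosen.append(best)

target = [random.expovariate(1 / 8.0) for _ in range(2000)]   # skewed sizes
print(distill(target, ["mean", "variance"]))
```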

  • Optimal mobility-aware admission control in content delivery networks

    Publication Year: 2003, Page(s): 234-237
    Cited by:  Papers (1)

    This paper addresses the problem of mobility management in content delivery networks (CDN). We introduce a CDN architecture where admission control is performed at mobility-aware access routers. We formulate a Markov modulated Poisson decision process for access control that captures the bursty nature of data and packetized traffic together with the heterogeneity of multimedia services. The optimization of performance parameters, such as the blocking probabilities and the overall utilization, is conducted, and the structural properties of the optimal solutions are also studied. Heuristics are proposed to cope with the computational difficulties of the optimal solution when several classes of multimedia traffic are considered.

  • Managing flash crowds on the Internet

    Publication Year: 2003, Page(s): 246-249
    Cited by:  Papers (23)

    A flash crowd is a surge in traffic to a particular Web site that causes the site to be virtually unreachable. We present a model of flash crowd events and evaluate the performance of various multilevel caching techniques suitable for managing these events. By using well-dispersed caches and with judicious choice of replacement algorithms we show reductions in client response times by as much as a factor of 25. We also show that these caches eliminate the server and network hot spots by distributing the load over the entire network.

  • On modeling and analyzing cache hierarchies using CASPER

    Publication Year: 2003, Page(s): 182-187
    Cited by:  Papers (8)

    The efficient use of cache hierarchies is crucial to the performance of uni-processor (desktop) and multiprocessor (enterprise) platforms. A plethora of research exists on the various structures and protocols that are of interest when considering caches. To enable the performance analysis of various cache hierarchies and associated allocation/coherence protocols, we developed a trace-driven simulation framework called CASPER - cache architecture simulation & performance exploration using refstreams. The CASPER simulation framework provides a rich set of features to model various cache organization alternatives, coherence protocols & optimizations, allocation/replacement policies, prefetching and partitioning techniques. In this paper, we describe the methodology behind CASPER, its detailed design and currently supported set of functionalities. CASPER has been used extensively for various research studies; a brief overview of some of these CASPER-based evaluation studies and their salient results will also be discussed. Based on its wide-ranging applicability, we believe CASPER is a useful addition to the performance analysis community for evaluating cache structures and hierarchies of various kinds.
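
    A trace-driven cache simulator in this spirit can be sketched in a few lines (a toy single-level set-associative LRU cache, not CASPER's protocol-aware hierarchy; the reference stream and cache geometry below are made up).

```python
# Replay a reference stream against a set-associative LRU cache and report
# the miss ratio.
from collections import OrderedDict

def simulate(trace, cache_size, block_size, assoc):
    n_sets = cache_size // (block_size * assoc)
    sets = [OrderedDict() for _ in range(n_sets)]
    misses = 0
    for addr in trace:
        block = addr // block_size
        s = sets[block % n_sets]
        if block in s:
            s.move_to_end(block)          # LRU update on a hit
        else:
            misses += 1
            if len(s) >= assoc:
                s.popitem(last=False)     # evict the least recently used block
            s[block] = True
    return misses / len(trace)

# Toy reference stream: a hot loop over 4 KB plus occasional far misses.
trace = [i % 4096 for i in range(0, 40000, 8)]
trace += [100000 + 64 * i for i in range(500)]
print(round(simulate(trace, cache_size=8192, block_size=64, assoc=4), 3))
```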

  • Markovian performance analysis of a synchronous optical packet switch

    Publication Year: 2003, Page(s): 254-257
    Cited by:  Papers (1)

    Switch architectures applicable to fixed-length optical packet networks are studied, and we compare their performance in terms of packet loss ratio. We propose analytical models of these switches (as discrete-time Markov chains) and we compare their performance with results obtained by simulation, varying the statistical properties of the incoming traffic. We show that Markovian models of future optical architectures can be applied as a tool for network design studies.
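
    The analysis style can be sketched by solving a small discrete-time Markov chain for a toy output buffer and reading the loss ratio off the full state (the transition matrix below is invented for illustration, not taken from the paper's models).

```python
# Stationary distribution of a toy buffer-occupancy Markov chain, solved by
# power iteration; the probability of the "buffer full" state approximates
# the packet loss ratio under memoryless per-slot arrivals.
import numpy as np

# States 0..3 = packets buffered; per-slot arrivals/departures move us around.
P = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.4, 0.6]])

pi = np.full(len(P), 1.0 / len(P))
for _ in range(10_000):
    pi = pi @ P                         # power iteration to the fixed point

print("stationary:", np.round(pi, 3), " loss ratio ~", round(float(pi[-1]), 3))
```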

  • An evaluation framework for active queue management schemes

    Publication Year: 2003, Page(s): 200-206
    Cited by:  Papers (4)

    Over the last decade numerous active queue management (AQM) schemes have been proposed in the literature. Many of these studies have been directed towards improving congestion control in best-effort networks. However, there has been a notable lack of standardised performance evaluation of AQM schemes. A rigorous study of the influence of parameterisation on specific schemes and the establishment of common comparison criteria is essential for objective evaluation of the different approaches. A framework for the detailed evaluation of AQM schemes is described in this paper. This provides a deceptively simple user interface whilst maximally exploiting relevant features of the NS2 simulator. Traffic models and network topologies are carefully chosen to characterise the target simulation environment. The credibility of the results obtained is enhanced by vigilant treatment of the simulation data. The impact of AQM schemes on global network performance is assessed using five carefully selected metrics. Thus, a comprehensive evaluation of AQM schemes may be achieved using the proposed framework.
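
    As a concrete example of the kind of scheme such a framework exercises, here is a simplified RED-style AQM decision (RED is not named in the abstract; the parameters are illustrative and the packet-count correction term of full RED is omitted).

```python
# Simplified RED: keep an exponentially weighted average of the queue length
# and drop/mark arriving packets with a probability that grows between two
# thresholds. Parameters below are illustrative, not tuned values.
import random

random.seed(1)

class SimpleRED:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, weight
        self.avg = 0.0

    def on_packet_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped/marked."""
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

red = SimpleRED()
drops = sum(red.on_packet_arrival(q) for q in ([2] * 500 + [20] * 3000))
print("drops during the congested phase:", drops)
```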
