
22nd International Teletraffic Congress (ITC 2010)

Date: 7-9 Sept. 2010


Displaying Results 1 - 25 of 39
  • Author index

    Publication Year: 2010 , Page(s): 1 - 6
    PDF (40 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2010 , Page(s): 1
    PDF (28 KB)
    Freely Available from IEEE
  • [Front and back cover]

    Publication Year: 2010 , Page(s): c1
    PDF (180 KB)
    Freely Available from IEEE
  • Segmented P2P video-on-demand: Modeling and performance

    Publication Year: 2010 , Page(s): 1 - 8
    PDF (195 KB) | HTML

    In a segmented peer-to-peer video-on-demand system, the video file is split into a number of segments and downloading proceeds in a more or less sequential manner from one segment to the next (i.e., in stages). We present an analytical fluid model for such systems. Notably, for this model we derive an explicit condition under which the system has a unique positive steady-state solution and the viewing quality is acceptable. The analytical results are complemented with extensive simulations of the corresponding stochastic model, as well as traces from a more realistic BitTorrent simulator.
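The staged dynamics described in this abstract can be illustrated with a toy two-stage fluid model. The equations and parameters below are hypothetical simplifications for illustration, not the paper's actual model:

```python
# Toy two-stage fluid model for segmented P2P VoD (illustrative only).
# x1, x2: peer populations downloading segments 1 and 2; peers arrive at
# rate lam, advance from segment 1 to 2 at rate theta, and finish at rate mu.
lam, theta, mu = 2.0, 1.0, 0.5   # hypothetical parameters
x1, x2, dt = 0.0, 0.0, 0.01
for _ in range(20000):           # Euler integration until near steady state
    dx1 = lam - theta * x1
    dx2 = theta * x1 - mu * x2
    x1 += dt * dx1
    x2 += dt * dx2

# Unique positive steady state of this toy system: x1* = lam/theta, x2* = lam/mu
print(x1, x2)
```

The fixed point of the Euler iteration coincides with the steady state of the ODE, so the loop converges to x1* = 2, x2* = 4 for these parameters.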
  • Optimal routing in parallel, non-observable queues and the price of anarchy revisited

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (3)
    PDF (334 KB) | HTML

    We consider a network of parallel, non-observable queues and analyze the “price of anarchy”, an index measuring the worst-case performance loss of a decentralized system with respect to its centralized counterpart in the presence of non-cooperative users. Our analysis takes the new point of view in which the router has memory of previous dispatching choices, which significantly complicates the nature of the problem. In the regime where demand grows proportionally with network capacity, we provide a tight lower bound on the socially optimal response time and a tight upper bound on the price of anarchy by means of convex programming. We then exploit this result to show, by simulation, that the billiard routing scheme yields a response time remarkably close to our lower bound, implying that billiards minimize response time. To study the added value of non-Bernoulli routers, we introduce the “price of forgetting” and prove that it is bounded from above by two, a bound that is tight in heavy traffic. Finally, other structural properties are derived numerically for the price of forgetting. These indicate that the benefit of having memory in the router is independent of the network size and heterogeneity, while depending monotonically on the network load only. These properties yield simple product forms that approximate the socially optimal response time well.
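The price-of-anarchy notion used above can be illustrated on two parallel M/M/1 queues under memoryless (Bernoulli) routing. The rates below are made up, and the equal-delay Wardrop condition is the standard textbook one, not the paper's memory-aware setting:

```python
# Price-of-anarchy sketch: two parallel M/M/1 queues, hypothetical rates.
mu1, mu2, lam = 2.0, 1.0, 1.5

def mean_rt(l1):
    """Mean response time when rate l1 of the traffic goes to queue 1."""
    l2 = lam - l1
    if l1 < 0 or l2 < 0 or l1 >= mu1 or l2 >= mu2:
        return float("inf")          # split infeasible or unstable
    return (l1 / (mu1 - l1) + l2 / (mu2 - l2)) / lam

# Social optimum: grid search over the traffic split.
opt = min(mean_rt(i / 10000 * lam) for i in range(10001))

# Wardrop equilibrium: selfish users equalise delays,
# 1/(mu1 - l1) = 1/(mu2 - l2)  =>  l1 = (lam + mu1 - mu2) / 2.
eq = mean_rt((lam + mu1 - mu2) / 2)

poa = eq / opt                       # price of anarchy >= 1
print(opt, eq, poa)
```

For these numbers the equilibrium response time is 4/3 and the price of anarchy is roughly 1.06.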
  • Traffic capacity of large WDM passive optical networks

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (1)
    PDF (174 KB) | HTML

    As passive optical networks (PONs) are increasingly deployed to provide high-speed Internet access, it is important to understand their fundamental traffic capacity limits. The paper discusses performance models applicable to wavelength division multiplexing (WDM) EPONs and GPONs under the assumption that users access the fibre via optical network units equipped with tunable transmitters. The stochastic models considered are based on multiserver polling systems, for which explicit analytical results are not known. A large-system asymptotic, mean-field approximation is used to derive closed-form solutions for these complex systems. Convergence of the mean-field dynamics is proved in the case of a simple network configuration. Simulation results show that, for a realistically sized PON, the mean-field approximation is accurate.
  • QoS and channel aware packet bundling for VoIP and data traffic in multi-carrier cellular networks

    Publication Year: 2010 , Page(s): 1 - 8
    PDF (530 KB) | HTML

    We study the problem of bundling multiple packets in multi-carrier cellular networks for VoIP and data traffic. In a time-slotted system such as the cdma2000 1xEV-DO downlink, part of a time slot may go unused if packet sizes are small, as is the case for real-time VoIP traffic. Packet bundling can alleviate this problem by sharing a time slot among multiple users, as proposed in the EV-DO Revision A system. A recent revision, EV-DO Revision B, further increases system capacity through multiple carriers. However, the efficacy of packet bundling, especially across multiple carriers, is not well understood. When packets are bundled, the packet with the lowest channel quality dictates the modulation and coding format of the entire bundle, possibly wasting significant slot space on unnecessary coding bits for the other packets. We first show how multiple carriers, together with packet bundling, improve spectral efficiency, but lose some of that benefit in realistic channel scenarios. We then propose an effective heuristic algorithm to exploit bundling across multiple carriers. We take QoS requirements as well as time-varying and user-dependent channel conditions into account, and show how channel utilization can be significantly improved while keeping the delays of real-time packets bounded.
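The core inefficiency, that the worst channel in a bundle dictates the rate for the whole bundle, can be sketched as follows. The per-user rates and the grouping rule are hypothetical, chosen only to show why grouping users of similar channel quality wastes less capacity:

```python
# Sketch: a bundle of k packets is transmitted at the rate of its worst
# channel, so each bundle of size k carries k * min(rates) bits per slot.
def bundle_throughput(qualities, k):
    """Total bits carried when users are bundled k at a time, in order."""
    total = 0
    for i in range(0, len(qualities), k):
        group = qualities[i:i + k]
        total += len(group) * min(group)   # worst rate dictates the bundle
    return total

rates = [1, 9, 2, 8]                       # hypothetical bits/slot per user
naive = bundle_throughput(rates, 2)        # pairs (1, 9) and (2, 8) -> 6
similar = bundle_throughput(sorted(rates), 2)  # pairs (1, 2) and (8, 9) -> 18
print(naive, similar)
```

Grouping users with similar channel quality triples the carried traffic in this toy example.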
  • Optimal concurrent access strategies in mobile communication networks

    Publication Year: 2010 , Page(s): 1 - 7
    PDF (266 KB) | HTML

    Current wireless channel capacities are closely approaching the theoretical limit, so complex signal processing schemes can yield only modest further capacity gains. Multi-path communication approaches, however, combine the benefits of higher performance and reliability by using multiple communication networks concurrently in areas covered by several wireless access networks. So far, little is known about how to take advantage of this potential effectively. Motivated by this, we consider parallel communication networks that handle two types of traffic: foreground and background. The foreground stream of files should be directed to the network that requires the least time to transfer each file, while the background streams are always directed to the same network. It is not clear up front how to select the appropriate network for each foreground stream. This may be done by a static selection policy based on the expected load of the networks, but a dynamic policy that accounts for the network status may perform better. We first propose a dynamic model that optimally assigns the foreground traffic to the available networks based on the number of fore- and background streams in both networks. In practice, however, all traffic streams may be served by one application server, so it may not be feasible to distinguish foreground from background streams. This limitation is accounted for in our second, partial-observation model, which considers limited observability for dynamic network selection. We compare these static and dynamic models with each other and with the well-known Join the Shortest Queue (JSQ) model. The results are illustrated by extensive numerical experiments.
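The Join the Shortest Queue (JSQ) baseline mentioned above is simple to sketch. The event trace below is a made-up example that shows only the dispatch mechanics, not the paper's models:

```python
# JSQ sketch: each arriving foreground stream joins the network that
# currently holds the fewest streams (ties broken toward network 0).
queues = [0, 0]      # streams currently in each network
trace = []           # which network each arrival was sent to

def arrive():
    i = min(range(2), key=lambda j: queues[j])   # shortest queue
    queues[i] += 1
    trace.append(i)

def depart(i):
    queues[i] -= 1

arrive()      # tie        -> network 0
arrive()      # 1 shorter  -> network 1
arrive()      # tie        -> network 0
depart(1)     # a stream finishes on network 1
arrive()      # 1 shorter  -> network 1
print(trace, queues)
```

Unlike the static policy from the abstract, JSQ reacts to the departure: the fourth arrival goes to the network that just freed capacity.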
  • Fastrack for taming burstiness and saving power in multi-tiered systems

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (5)
    PDF (355 KB) | HTML

    Burstiness (i.e., sudden surges) in user demands on enterprise systems that operate under the multi-tiered paradigm commonly leads to overprovisioning: the system is configured with excess hardware to meet peak user demands, often resulting in excessive (and unnecessary) power costs. In this paper, we present Fastrack, a parameter-free algorithm for dynamic resource provisioning that uses simple statistics to promptly distill information about changes in workload burstiness. This information, coupled with the application's end-to-end response times and system bottleneck characteristics, guides resource allocation, and proves effective under a broad variety of application burstiness profiles and bottleneck scenarios. Extensive simulations illustrate Fastrack's robustness in consistently meeting predefined service level objectives while minimizing power usage.
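One classical "simple statistic" for detecting burstiness is the index of dispersion for counts; it is offered here only as a plausible illustration, since the abstract does not name the statistics Fastrack actually uses:

```python
# Index of dispersion for counts, I = Var(N) / E[N], over per-window
# request counts.  I ~ 1 for Poisson-like traffic, I >> 1 under surges.
def index_of_dispersion(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

smooth = [10, 10, 11, 9, 10, 10, 9, 11]   # hypothetical steady workload
bursty = [1, 1, 40, 1, 1, 38, 1, 1]       # hypothetical surging workload
print(index_of_dispersion(smooth), index_of_dispersion(bursty))
```

A provisioner can track such a statistic over a sliding window and add capacity when it jumps.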
  • Estimating the access link quality by active measurements

    Publication Year: 2010 , Page(s): 1 - 8
    PDF (867 KB) | HTML

    The access link quality experienced by end users depends on the amount of traffic and on the presence of network anomalies. Various techniques exist to detect anomalies, but little attention has been devoted to quantifying access link quality and the extent to which network anomalies affect the end user's experience of the access link. We refer to this aspect as the impact factor of the anomaly, which we define as the percentage of affected destinations. Ideally, a node would continuously monitor all possible routes to detect any degradation in performance, but this is impractical. In this paper we show how a node can estimate the quality of Internet access through a limited set of measurements. We first study the user's access network to understand the typical features of its connectivity tree. We then define an unbiased estimator for the quality of access and compute the minimum number of paths to monitor so that the estimator achieves a desired accuracy without knowledge of the underlying topology. We use real data to construct a network graph, and we validate our solution by injecting a large number of anomalies and comparing the real and estimated quality of access for all available end hosts. Our results show that the impact factor is a meaningful metric for evaluating the quality of Internet access.
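The trade-off between probing cost and estimator accuracy can be sketched with the generic Hoeffding bound for estimating a binomial proportion; the paper derives its own topology-aware estimator, so this is just the textbook baseline:

```python
# The impact factor is a fraction of affected destinations, so probing n
# randomly chosen paths gives a proportion estimate.  Hoeffding's
# inequality yields a topology-independent sample size for accuracy eps
# with confidence 1 - delta:  n >= ln(2/delta) / (2 * eps^2).
import math

def paths_to_probe(eps, delta):
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

n = paths_to_probe(eps=0.05, delta=0.05)   # +/- 5% with 95% confidence
print(n)
```

About 738 probed paths suffice for a 5%-accurate estimate regardless of how many destinations exist, which is why a limited measurement set can work.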
  • Quasi-stationary analysis for queues with temporary overload

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (2)
    PDF (277 KB) | HTML

    Motivated by the high variation in transmission rates for document transfers on the Internet and file downloads from web servers, we study the buffer content of a queue with a fluctuating service rate. The fluctuations are assumed to be driven by an independent stochastic process, and we allow the queue to be overloaded in some of the server states. In all but a few special cases, either exact analysis is intractable, or the dependence of system performance on input parameters (such as the traffic load) is hidden in complex or implicit characterizations. Various asymptotic regimes have been considered to develop insightful approximations. In particular, the so-called quasi-stationary approximation has proven extremely useful under the assumption of uniform stability. We refine the quasi-stationary analysis to allow for temporary instability by studying the “effective system load”, which captures the effect of work accumulated during periods in which the queue is unstable.
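A minimal numeric illustration of the temporary-overload setting, with hypothetical rates: the queue is unstable in the slow server state yet stable on average, which is exactly the regime where uniform stability fails:

```python
# Two-state fluctuating server: unstable while "slow" (rho > 1), but the
# time-average load stays below one, so the queue is stable overall.
lam = 1.0                          # arrival rate of work
mu = {"fast": 2.0, "slow": 0.8}    # hypothetical service rates per state
pi = {"fast": 0.7, "slow": 0.3}    # stationary fraction of time per state

rho_slow = lam / mu["slow"]                        # temporary overload
rho_avg = lam / sum(pi[s] * mu[s] for s in mu)     # time-average load
print(rho_slow, rho_avg)
```

During slow periods work accumulates at rate lam - mu["slow"] = 0.2 and must be drained later; capturing that accumulation is what the paper's "effective system load" refines.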
  • A value-based framework for internet peering agreements

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (5)
    PDF (162 KB) | HTML

    Internet Service Providers (ISPs) use complex peering policies, stipulating various rules for peering with other networks. Peering strategy is often considered a “black art” rather than a science, and the outcome of a peering negotiation can depend on factors that are neither technical nor economic. Consequently, ISPs do not have a clear idea of which networks they should peer with, or of the price they should demand or offer to ensure a stable peering link. We propose a quantitative framework for settlement-free and paid-peering links based on the “value” of a peering link, i.e., the benefit that networks derive from that link. We first study a solution in which a centralized oracle determines a provably stable, optimal and fair price for a paid-peering link, based on perfect knowledge of the revenues and costs of each network. We next show that, with perfect knowledge, the centralized solution can be implemented individually by the peering networks. We then study the effects of inaccurate estimation of peering value by the peering networks. Finally, we examine how value-based peering affects the density of peering links, the nature of end-to-end paths, and the profitability of various network types in the global Internet.
  • DiffServ pricing games in multi-class queueing network models

    Publication Year: 2010 , Page(s): 1 - 8
    PDF (262 KB) | HTML

    The introduction of differentiated services on the Internet has failed primarily because of economic impediments. We focus on the provider-competition aspect and develop a multi-class queueing network game framework to study it. Each network service provider is modeled as a single-server multi-class queue, and providers post prices for the various service classes. Traffic is elastic and comes in multiple types, each sensitive to Quality of Service (QoS) to a different degree. Arriving users choose a provider and a class of service. We study the pricing and service competition between the providers in a game-theoretic setting, provide sufficient conditions for the existence of a Nash equilibrium in the Bertrand (pricing) game between the multi-class queueing service providers, and characterize the inefficiency (price of anarchy) due to strategic DiffServ pricing.
  • Application-based feature selection for Internet traffic classification

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (2)
    PDF (294 KB) | HTML

    Several statistical techniques using flow features have recently been proposed to address the problem of traffic classification. These methods generally achieve high recognition rates for the dominant applications and more erratic results for less popular ones. This stems from the selection of the flow features used as inputs to the statistical algorithm, which is biased toward the dominant applications. As a consequence, existing methods are difficult to adapt to the changing needs of network administrators, who might want to quickly identify dominant applications such as p2p or HTTP-based applications, or to zoom in on specific, less popular (in terms of bytes or flows) applications at a given site, such as HTTP streaming or Gnutella. We propose a new approach, based on logistic regression, that addresses these issues. Our technique can automatically select distinct, per-application features that best separate each application from the rest of the traffic. In addition, it has a low computational cost and needs to inspect only the first few packets of a flow to classify it, which means it can be implemented in real time. We illustrate our method using two recent traces collected on two ADSL platforms of a large ISP.
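The idea of per-application feature selection via logistic regression can be sketched on toy data. The features, values and plain gradient-descent loop below are illustrative assumptions, not the paper's method:

```python
# One-vs-rest logistic regression for a single target application:
# after fitting, the features with the largest |weight| are the ones
# that best separate this application from the rest of the traffic.
import math

# Toy flows: feature 0 (say, a normalized first-packet size) separates
# the target application; feature 1 is uninformative noise.
X = [[1.0, 0.3], [0.9, 0.7], [0.8, 0.5],    # target application
     [0.1, 0.4], [0.2, 0.6], [0.0, 0.5]]    # all other traffic
y = [1, 1, 1, 0, 0, 0]

w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):                        # plain gradient descent
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        p = 1 / (1 + math.exp(-z))           # sigmoid
        g = p - yi                           # gradient of log loss
        w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
        b -= lr * g

best = max(range(len(w)), key=lambda j: abs(w[j]))
print(w, best)    # feature 0 gets the dominant weight
```

Repeating this fit once per application yields a distinct feature ranking for each one, which is the per-application selection the abstract describes.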
  • Wireless multi-hop networks with stealing: Large buffer asymptotics

    Publication Year: 2010 , Page(s): 1 - 8
    PDF (447 KB) | HTML

    Wireless networks equipped with CSMA are scheduled in a fully distributed manner. A disadvantage of such distributed control in multi-hop networks is the hidden node problem, which causes the effect of stealing, in which a downstream node steals the channel from an upstream node with probability p. Aziz, Starobinski and Thiran have recently shown that the N-hop model with stealing is stable only in the case N = 3 and p ∈ (0, 1]. This 3-hop model can be modeled as a random walk in the quarter plane. We derive various asymptotic expressions for the stationary large buffer probabilities of the 3-hop model that capture the effect of p.
  • Efficient computation of queueing delay at a network port from output link packet traces

    Publication Year: 2010 , Page(s): 1 - 8
    PDF (516 KB) | HTML

    Current Internet core routers provide enough buffer capacity at each output port to keep the link busy for 250 ms, to avoid disrupting TCP flows through dropped packets. Since link speeds are rising much more quickly than the availability and cost-effectiveness of large high-speed memories, there is now significant interest in reducing these buffers. Queue inferencing (QI) - a passive, external method for calculating the time-dependent queue lengths and waiting times from start/end-of-service event timestamps - is an ideal tool for studying the effects of such buffer size reductions, because it can be applied in situ to packet traces collected from existing equipment carrying live traffic, without any service disruption. Existing QI algorithms are too computationally expensive for this purpose, but we observe that packet sizes in a typical network trace are skewed towards a few “favored” sizes, and we introduce two methods for exploiting this property. First, since the output of a QI algorithm is a deterministic function of the service-time sequence in a busy period, and our traces contain many repetitions of common packet-size sequences, we cache the output of the QI algorithm for later reuse. Second, we develop a new incremental QI algorithm, which can start from cached results for any prefix of the current busy period. Combining both methods leads us to structure the cache as a decision tree. Using network traces from the WIDE project backbone to evaluate our method, we found that the frequencies of occurrence both of particular busy periods and of busy-period lengths follow a decreasing power law, with busy-period lengths greater than 20 very rare and none greater than 70. Moreover, although the size of the complete decision tree grows linearly with the length of the trace, we can restrict the tree to a finite number of “active” nodes (i.e., those nodes for which the probability of a visit is above some threshold) and use simple constant-time bounds to handle the rare exceptional cases.
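The caching idea, that the result is a deterministic function of the packet-size sequence in a busy period, can be sketched with a memoized recursion. This toy stands in for a real QI algorithm and assumes all packets of the busy period are already queued when it starts; the paper's decision tree generalises such a cache to prefixes of the sequence:

```python
# Memoized per-busy-period computation: identical service-time sequences
# (common because packet sizes cluster on a few "favored" values) are
# computed once and thereafter served from the cache.
cache = {}
hits = 0

def waiting_times(service_times):
    """FIFO waiting times in a busy period that starts with all packets queued."""
    global hits
    key = tuple(service_times)
    if key in cache:
        hits += 1
        return cache[key]
    w, out = 0.0, []
    for s in service_times:
        out.append(w)
        w += s          # packets are served back to back within the period
    cache[key] = out
    return out

a = waiting_times([1.5, 0.04, 1.5])
b = waiting_times([1.5, 0.04, 1.5])   # repeated sequence: served from cache
print(a, hits)
```

The second call does no recomputation, mirroring how repeated packet-size sequences in a trace amortise the cost of the expensive QI step.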
  • Quality of signalling: A new concept for evaluating the performance of non-INVITE SIP transactions

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (5)
    PDF (596 KB) | HTML

    Providing Quality of Service (QoS) and Quality of Experience (QoE) for media such as audio or video is of key importance for the evolution of future packet-based networks. In telecommunications especially, the subjective service quality as perceived by the end user depends, among other parameters, on the setup success rate as well as on the session setup delay, which is measured end to end. In this paper, we introduce the new concept of “Quality of Signaling” (QoSg) and present an analysis based on Session Initiation Protocol (SIP) networks, where messages usually traverse several hops (SIP proxies) and several connections. SIP introduces the concept of transactions, which are managed hop by hop and form the basic building blocks of SIP signaling. We argue that the end-to-end service quality strongly depends on the performance of these transactions, and we provide a comprehensive analytic, simulative and measurement-based performance evaluation of single SIP transactions.
  • Efficiency of caches for content distribution on the Internet

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (5)
    PDF (510 KB) | HTML

    Traffic engineering and economical provisioning of bandwidth are crucial for network providers in times of intense competition in broadband access networks. We investigate the efficiency of caching as an option to shorten end-to-end paths and delays while at the same time reducing traffic loads. The portion of cacheable content distributed over HTTP on the Internet has been increasing recently, and the favourable effect of Zipf-like access patterns on caches has been confirmed for the currently most popular Web sites with user-generated content. Content delivery networks (CDNs) and peer-to-peer (P2P) networks distribute a major portion of IP traffic, with different implications for caching: P2P traffic is subject to long transport paths, although appropriate for caching in principle, while CDNs are based on server infrastructures that allow for shorter paths on a global scale on top of network provider platforms. We give a brief overview of the options for deploying caches by content and network providers at different points in the interconnection, backbone or aggregation. The main part of the work focuses on the analysis of replacement strategies with regard to Zipf-like and fixed or slowly varying access patterns. A comparative evaluation shows that least recently used (LRU) differs essentially from caching strategies based on access statistics in terms of the achievable hit rates.
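The LRU-versus-access-statistics comparison can be sketched under a static Zipf popularity profile. The "ideal LFU" below simply pins the most popular objects and is only a crude stand-in for the statistics-based strategies the paper evaluates; all sizes and rates are assumptions:

```python
# Hit-rate comparison under a static Zipf(alpha = 1) popularity profile.
import random
random.seed(7)

N, k, n_req = 200, 20, 50000
weights = [1 / (i + 1) for i in range(N)]              # Zipf popularity
requests = random.choices(range(N), weights=weights, k=n_req)

# LRU cache of size k (most recently used object kept at the list tail).
cache, lru_hits = [], 0
for r in requests:
    if r in cache:
        lru_hits += 1
        cache.remove(r)
    elif len(cache) >= k:
        cache.pop(0)                                   # evict least recent
    cache.append(r)

# "Ideal LFU": pin the k most popular objects (popularity known exactly).
top = set(range(k))
lfu_hits = sum(1 for r in requests if r in top)

print(lru_hits / n_req, lfu_hits / n_req)
```

With a fixed popularity profile, the statistics-based cache keeps exactly the right objects, while LRU wastes slots on unpopular one-off requests; this is the hit-rate gap the abstract refers to.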
  • Using local search for traffic engineering in switched Ethernet networks

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (1)
    PDF (330 KB) | HTML

    Large switched Ethernet networks are deployed in campus networks, data centers and metropolitan area networks to support various types of services. Congestion is usually handled by deploying more switches and installing higher-bandwidth link bundles, although better use of the existing infrastructure would allow congestion to be dealt with at lower cost. In this paper, we use constraint-based local search and the COMET language to develop an efficient traffic engineering technique that improves the spanning tree protocol's use of the infrastructure. We evaluate the performance of our scheme on several types of network topologies and traffic matrices, and we compare it with the performance that a routing-based deployment supported by an IP traffic engineering technique would obtain.
  • Optimum packet length masking

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (5)
    PDF (209 KB) | HTML

    Application-level traffic classification based on statistical features of packet flows has been demonstrated recently. Among the most significant characteristics is packet length: even ciphered flows leak information about their content through the sequence of packet length values. There are obvious ways to destroy such side information, e.g. by setting all packets to the maximum allowed length, but this approach can entail an extremely large overhead, which makes it impractical. There is room to investigate the optimal trade-off between the overhead/complexity of packet length masking and the suppression of information leakage about flow content through packet length values. In this work we characterize the optimum first-order statistical padding technique that guarantees indistinguishability of different application flows. We also discuss how to account for correlation between subsequent packet lengths. Numerical results are shown with reference to real network traffic traces, specifically flows of HTTP, POP3, SSH, and FTP (control session) traffic.
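The "extremely large overhead" of the naive pad-to-maximum masking is easy to quantify; the flow length samples below are hypothetical:

```python
# Padding every packet to the maximum length makes every flow's length
# distribution identical (degenerate), i.e. perfectly indistinguishable,
# at a byte cost that is worst for small-packet flows.
def pad_to_max_overhead(lengths, max_len=1500):
    """Relative byte overhead of padding every packet to max_len."""
    padded_total = max_len * len(lengths)
    return (padded_total - sum(lengths)) / sum(lengths)

http = [1500, 1500, 64, 1500, 400]   # hypothetical bulk-transfer flow
pop3 = [64, 120, 64, 800, 64]        # hypothetical small-packet flow

oh_http = pad_to_max_overhead(http)  # ~51% extra bytes
oh_pop3 = pad_to_max_overhead(pop3)  # ~574% extra bytes
print(oh_http, oh_pop3)
```

The several-fold inflation of the small-packet flow is why the paper looks for an optimal first-order padding rather than padding everything to the MTU.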
  • Inferring applications at the network layer using collective traffic statistics

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (2)
    PDF (896 KB) | HTML

    Operating, managing and securing networks requires a thorough understanding of the demands placed on the network by the endpoints it interconnects, the characteristics of the traffic those endpoints generate, and the distribution of that traffic over the resources of the network infrastructure. A major differentiator in the types of resource required by traffic is the class of endpoint application that generates it. Service providers determine the application mix present in traffic via measurements, e.g., flow measurements furnished by routers. Previous work has shown that a fairly accurate determination of application type can be made from such data. However, protocol-level information, such as TCP/UDP ports and other parts of the transport header, and in some cases parts of the network header, may not be accessible because endpoints or gateways use encryption or tunneling protocols. Furthermore, the utility of ports as signifiers of application type is limited by abuse and non-standard usage, among other reasons. These factors reduce classification accuracy. In this paper, we propose a novel technique for inferring the distribution of application classes present in the aggregated traffic flows between endpoints that exploits both the measured statistics of the traffic flows and the spatial distribution of those flows across the network. Our method employs a two-step supervised model, where a bootstrapping step provides an initial (inaccurate) inference of the traffic application classes, and a graph-based calibration step adjusts the initial inference using the collective spatial traffic distribution. In evaluations using real traffic flow measurements from a large ISP, we show how our method can accurately classify application types within aggregate traffic between endpoints, even without knowledge of ports and other traffic features. While the bootstrap estimate classifies the aggregates with 80% accuracy, incorporating spatial distributions through calibration increases the accuracy to 92%, i.e., roughly halving the number of errors.
  • Performance analysis of traffic surges in multi-class communication networks

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (2)
    PDF (450 KB) | HTML

    In multi-class communication networks, traffic surges due to one class of users can significantly degrade the performance of other classes. During these transient periods, it is of crucial importance to implement priority mechanisms that conserve the quality of service experienced by the affected classes while ensuring that the temporarily unstable class is not entirely neglected. In this paper, we examine - for a suitably scaled set of parameters - the complex interaction that occurs between several classes of traffic when an unstable class is penalized proportionally to its level of congestion. We characterize the evolution of the performance measures of the network from the moment the initial surge takes place until the system reaches its equilibrium. We show that, using a time-space-transition scaling, the trajectories of the temporarily unstable class can be described by a differential equation, while those of the stable classes retain their stochastic nature. In particular, the temporarily unstable class evolves on a much slower time scale than the stable classes; although the time scales decouple, the dynamics of the temporarily unstable and stable classes continue to influence one another. We further characterize the resulting differential equations for several simple network examples. In particular, the macroscopic asymptotic behavior of the unstable class allows us to gain important qualitative insights into how the bandwidth allocation affects performance.
  • On the bias of BFS (Breadth First Search)

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (9)
    PDF (370 KB) | HTML

    Breadth First Search (BFS) and other graph traversal techniques are widely used for measuring large unknown graphs, such as online social networks. It has been empirically observed that incomplete BFS is biased toward high-degree nodes. In contrast to more thoroughly studied sampling techniques, such as random walks, the bias of BFS has not been characterized to date. In this paper, we quantify the degree bias of BFS sampling. In particular, we calculate the node degree distribution expected to be observed by BFS as a function of the fraction of covered nodes, in a random graph RG(pk) with a given (and arbitrary) degree distribution pk. We also show that, for RG(pk), all commonly used graph traversal techniques (BFS, DFS, Forest Fire, and Snowball Sampling) lead to the same bias, and we show how to correct for it. To give a broader perspective, we compare this class of exploration techniques to random walks, which are well studied and easier to analyze. Next, we study by simulation the effect of graph properties not captured directly by our model, and find that the bias is amplified in graphs with strong positive assortativity. Finally, we demonstrate the above results by sampling the Facebook social network, and we provide some practical guidelines for graph sampling in practice.
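In the limit of small coverage, traversal discovers nodes roughly in proportion to their degree, so the observed distribution tends to the classical size-biased version of pk. A minimal sketch with a made-up degree distribution (the paper derives the full coverage-dependent expression, of which this is only the limiting case):

```python
# Size-biased degree distribution q_k = k * p_k / <k>: what an
# early-stage, degree-proportional exploration expects to observe.
pk = {1: 0.5, 2: 0.3, 10: 0.2}                 # hypothetical true p_k
mean_deg = sum(k * p for k, p in pk.items())   # <k> = 3.1

qk = {k: k * p / mean_deg for k, p in pk.items()}
observed_mean = sum(k * q for k, q in qk.items())   # = <k^2>/<k> = 7.0
print(mean_deg, observed_mean)
```

The observed mean degree, <k^2>/<k>, exceeds the true mean <k> whenever the degree variance is positive, which is precisely the high-degree bias the abstract quantifies.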
  • On resource aware algorithms in epidemic live streaming

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (1)
    PDF (200 KB) | HTML

    Epidemic-style diffusion schemes have previously been proposed for peer-to-peer live streaming. Their performance trade-offs have been extensively studied for homogeneous systems, where all peers have the same upload capacity, but epidemic schemes designed for heterogeneous systems are not yet completely understood. In this paper we focus on the peer selection process and propose a generic model that encompasses a large class of algorithms, modeling the process as a combination of two functions, an aware one and an agnostic one. By means of simulations, we analyze the awareness-agnosticism trade-off in the peer selection process and the impact of the source distribution policy in non-homogeneous networks. We highlight the fairness trade-off that arises between the performance of heterogeneous peers as a function of the level of awareness, and the strong impact that the source selection policy and bandwidth provisioning have on diffusion performance.
  • Improvements to LISP Mobile Node

    Publication Year: 2010 , Page(s): 1 - 8
    Cited by:  Papers (5)
    PDF (1507 KB) | HTML

    The Locator/Identifier Separation Protocol (LISP) is a new routing architecture for the Internet that separates local and global routing. It offers more flexibility to edge networks and has the potential to reduce the growth of the BGP routing tables. Recently, a concept for mobility in LISP (LISP Mobile Node, LISP-MN) was presented. We analyze LISP-MN and show that it requires double mapping lookups in all LISP gateways, leads to triangle routing under some conditions, and requires double encapsulation. We propose gradual improvements to LISP-MN that avoid these drawbacks under many conditions.