
2010 IEEE International Workshop Technical Committee on Communications Quality and Reliability (CQR)

Date: 8-10 June 2010


Displaying Results 1 - 16 of 16
  • Proposal and evaluation of Joint Rate Control for stored video streaming

    Page(s): 1 - 6

    Rate control, which involves adapting the sending rate and video rate to network environments, is effective for stably streaming stored video over unmanaged IP networks. In this paper, we propose a novel rate control technique that optimizes an evaluation function defined as user-perceived video quality, using an optimal control law from modern control theory. We call our method Joint Rate Control because it integrally controls both the sending rate and the video rate. Numerical evaluation shows that Joint Rate Control improved the evaluation function value by about 20%-50% compared with a conventional rate control method. We have demonstrated that Joint Rate Control optimizes the video quality; that is, it was the most effective control method for streaming video.

  • Performance analysis of Packet Loss Concealment in mobile environments with a two-state loss model

    Page(s): 1 - 6

    This paper presents and exercises a model for estimating the performance of Packet Loss Concealment (PLC) in wireless field environments. In particular, we estimate the probability of successfully receiving a random packet transmitted over an unreliable wireless link on which a packet loss concealment algorithm, such as schemes 1 and 2 of the IETF's RFC 2198 standard, is used. The model assumes a channel with a two-state loss system. The results are useful for understanding how packet loss concealment schemes can be designed to support high quality for real-time wireless applications such as VoIP on ad hoc networks.
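
    The two-state loss system mentioned above is commonly modelled as a Gilbert-style Markov chain. Below is a minimal simulation sketch of such a channel; the transition probabilities and the loss rate in the bad state are illustrative values, not parameters from the paper.

```python
import random

def simulate_two_state_loss(n_packets, p_good_to_bad=0.05, p_bad_to_good=0.4,
                            loss_prob_bad=1.0, seed=0):
    """Simulate packet loss over a two-state (good/bad) Markov channel.

    Returns a list of booleans, True meaning the packet was lost.
    """
    rng = random.Random(seed)
    state = "good"
    losses = []
    for _ in range(n_packets):
        lost = state == "bad" and rng.random() < loss_prob_bad
        losses.append(lost)
        # state transition for the next packet
        if state == "good" and rng.random() < p_good_to_bad:
            state = "bad"
        elif state == "bad" and rng.random() < p_bad_to_good:
            state = "good"
    return losses

losses = simulate_two_state_loss(10000)
print("overall loss rate:", sum(losses) / len(losses))
```

    With these illustrative values the mean burst length is 1/0.4 = 2.5 packets, the kind of bursty loss that RFC 2198 redundancy is designed to conceal.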

  • A general parametric model for perceptual video quality estimation

    Page(s): 1 - 6

    In this paper, a general parametric model is proposed that estimates the perceived quality of video coded with different codecs at any bit rate and display format. The proposed model takes video content into account using an objective estimate of spatio-temporal activity based on the average SAD (Sum of Absolute Differences) of the clip. Studies were made on more than 2000 processed video clips, coded in MPEG-2 and H.264/AVC, at bit rates ranging from 25 kb/s to 12 Mb/s, in SD, VGA, CIF and QCIF display formats. The results show that the proposed model fits the perceived video quality very well for any combination of codec, bit rate and display format.
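
    The average SAD used above as a content descriptor can be computed directly from decoded frames. The sketch below is a minimal illustration with made-up frame data, not the authors' implementation.

```python
import numpy as np

def mean_sad(frames):
    """Average Sum of Absolute Differences between consecutive frames,
    a crude indicator of the spatio-temporal activity of a clip."""
    frames = [f.astype(np.int32) for f in frames]   # avoid uint8 wrap-around
    sads = [np.abs(b - a).sum() for a, b in zip(frames, frames[1:])]
    return float(np.mean(sads)) if sads else 0.0

# toy example: random QCIF-sized frames standing in for decoded video
rng = np.random.default_rng(0)
clip = [rng.integers(0, 256, size=(144, 176), dtype=np.uint8) for _ in range(10)]
print("mean SAD:", mean_sad(clip))
```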

  • New blind equalization technique for Constant Modulus Algorithm (CMA)

    Page(s): 1 - 6

    Equalization plays an important role in enabling a communication receiver to correctly recover the symbols sent by the transmitter, since the received signal may contain additive noise and intersymbol interference (ISI). Blind equalization is a class of equalization techniques in which the symbols transmitted over a communication channel are recovered without the aid of training sequences. Blind equalizers have recently attracted wide research interest because they require neither a training sequence nor extra bandwidth, but their main weaknesses are high computational complexity and slow adaptation, so various algorithms have been proposed to overcome these drawbacks. The most popular and widely accepted blind algorithm is the Constant Modulus Algorithm (CMA). The performance of CMA suffers from a slow convergence rate, corresponding to various transmission delays, especially in wireless communication systems, which require higher speed and lower bandwidth. This paper introduces a new blind equalization technique, the Exponentially Weighted Step-size Recursive Least Squares Constant Modulus Algorithm (EXP-RLS-CMA), based on the combination of the Exponentially Weighted Step-size Recursive Least Squares (EXP-RLS) algorithm and the Constant Modulus Algorithm (CMA). Under several assumptions, it achieves a faster convergence rate to the optimal delay at which the Mean Squared Error (MSE) is minimum, so the algorithm can be implemented in a digital system to improve receiver performance. Simulations are presented to show the merits of this technique; the main parameters used to evaluate the performance are the rate of convergence, the mean square error (MSE), and the average error versus different signal-to-noise ratios.
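
    For reference, the baseline CMA that the proposed EXP-RLS-CMA builds on is a simple stochastic-gradient update of the equalizer taps. The sketch below shows only that baseline, not the paper's EXP-RLS-CMA; the channel, step size and constellation are illustrative choices.

```python
import numpy as np

def cma_equalize(received, num_taps=11, mu=1e-3, r2=1.0):
    """Baseline stochastic-gradient Constant Modulus Algorithm equalizer.

    `r2` is the dispersion constant (1.0 for unit-modulus constellations
    such as QPSK). Returns the equalizer output sequence.
    """
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                    # centre-spike initialization
    out = np.zeros(len(received), dtype=complex)
    for n in range(num_taps, len(received)):
        x = received[n - num_taps:n][::-1]    # tap-delay-line input vector
        y = np.dot(w, x)                      # equalizer output
        e = (np.abs(y) ** 2 - r2) * y         # CMA error term
        w -= mu * e * np.conj(x)              # gradient descent on the CM cost
        out[n] = y
    return out

# toy run: QPSK symbols through a mild ISI channel plus noise
rng = np.random.default_rng(1)
syms = (rng.choice([1, -1], 4000) + 1j * rng.choice([1, -1], 4000)) / np.sqrt(2)
channel = np.array([1.0, 0.3 + 0.2j, 0.1])
noise = 0.01 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))
rx = np.convolve(syms, channel)[:4000] + noise
equalized = cma_equalize(rx)
```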

  • An ANFIS-based hybrid quality prediction model for H.264 video over UMTS networks

    Page(s): 1 - 6

    The Quality of Service (QoS) of the Universal Mobile Telecommunication System (UMTS) is severely affected by losses occurring in the Radio Link Control (RLC) layer due to high error probability. Therefore, for any video quality prediction model, it is important to model the radio-link loss behaviour appropriately. In addition, video content has an impact on video quality under the same network conditions. The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters from both the physical and application layers over UMTS networks. In order to characterize the QoS level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) is proposed to predict the video quality in terms of the Mean Opinion Score (MOS). ANFIS is well suited for video quality prediction over error-prone and bandwidth-restricted UMTS because it combines the advantages of neural networks and fuzzy systems. The loss models considered are 2-state Markov models with variable Mean Burst Lengths (MBLs) depicting various UMTS scenarios. The proposed model is trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from the model. The work should help in the development of a reference-free video prediction model and Quality of Service (QoS) control methods for video over UMTS networks.

  • How to deal with bot scum in MMORPGs?

    Page(s): 1 - 6

    Nowadays, bots are becoming a critical issue for the online gaming world. Bots give unfair advantages and are therefore considered cheating and undesirable on game servers. Currently, CAPTCHAs and human controls are the most commonly chosen strategies to catch bots. However, these methods are intrusive and complicated, and have proven to be inefficient given the large populations of game servers. Researchers have proposed various kinds of automated detection schemes. Yet, these proposals exhibit impractical features, such as complexity or scalability issues, making deployment on real systems problematic. We propose to study bots' and humans' client-server communication patterns, focusing on one of the most famous MMORPGs, World of Warcraft. Intuiting that, for the sake of efficiency and human-looking behavior, bots cannot constrain both packet timing and sizes, we propose a detection scheme that combines both parameters. We propose an online algorithm that applies our scheme on the fly as packets arrive. We evaluate the proposed scheme with real packet traces and observe that it can detect bots with a small false-alarm probability.
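
    The intuition that bots cannot make both packet timing and packet sizes look human can be illustrated with a very simple online check on both statistics. The sketch below is only an illustration of that idea, with a hypothetical window size and thresholds; it is not the detection scheme proposed in the paper.

```python
import math
from collections import Counter, deque

class TrafficRegularityDetector:
    """Illustrative online check flagging clients whose packet timing
    and sizes are both unusually regular (hypothetical thresholds,
    not the authors' algorithm)."""

    def __init__(self, window=200, cv_threshold=0.2, entropy_threshold=1.5):
        self.gaps = deque(maxlen=window)      # recent inter-arrival times
        self.sizes = deque(maxlen=window)     # recent packet sizes
        self.last_ts = None
        self.cv_threshold = cv_threshold
        self.entropy_threshold = entropy_threshold

    def observe(self, timestamp, size):
        """Feed one client-to-server packet as it arrives."""
        if self.last_ts is not None:
            self.gaps.append(timestamp - self.last_ts)
        self.last_ts = timestamp
        self.sizes.append(size)

    def looks_like_bot(self):
        if len(self.gaps) < self.gaps.maxlen:
            return False                      # not enough evidence yet
        mean = sum(self.gaps) / len(self.gaps)
        var = sum((g - mean) ** 2 for g in self.gaps) / len(self.gaps)
        cv = math.sqrt(var) / mean if mean else 0.0
        counts = Counter(self.sizes)
        total = len(self.sizes)
        entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
        # human play tends to show bursty timing (high cv) and varied
        # packet sizes (high entropy); scripted clients often show neither
        return cv < self.cv_threshold and entropy < self.entropy_threshold
```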

  • Dynamic sharing mechanism for guaranteed availability in MPLS based networks

    Page(s): 1 - 6

    This paper proposes an algorithm for allocating connections with bandwidth and availability requirements in a telecommunication network where the connections may share resources in their backup paths. The allocation is dynamic in the sense that allocation requests have random arrival times, sources and destinations, and allocated connections have random durations. The algorithm has been designed for core backbone networks with continuous bandwidth distribution, such as MPLS networks. Efficient bandwidth utilization is obtained by an intelligent sharing mechanism that takes into account the properties of networks with continuous bandwidth allocation. The problem may be formulated as an NIP (Nonlinear Integer Programming) problem. However, due to the well-known complexity and scalability limitations in solving this kind of problem, the solution is based on heuristic procedures. A performance comparison with previously published algorithms is carried out for some reference networks, demonstrating substantially better resource usage.

  • QoE assessment in haptic media, sound and video transmission: Influences of network latency

    Page(s): 1 - 6

    In this paper, we investigate the influence of differences in network latency among haptic media, sound and video on the quality of experience (QoE) of a haptic media, sound and video transfer system. In the case where network delay jitter exists, we subjectively assess the operability of the haptic interface device, sound output quality, video output quality, inter-stream synchronization quality, and comprehensive quality as QoE. We also evaluate the application-level quality of service (QoS), and we demonstrate that it is possible to estimate QoE parameters from QoS parameters with high accuracy.

  • Routing for reducing flow convergence on particular nodes

    Page(s): 1 - 6

    Flow convergence on particular nodes causes various problems such as QoS degradation. We describe a new routing method for reducing the number of flows relayed by the node that relays the most flows (called “the maximum number of relay flows”). A full search is required to calculate the optimal solution to this problem. Thus, we created an approximate algorithm that calculates a near-optimal solution at scalable computational cost. Moreover, when this method permits call loss, the maximum number of relay flows can be reduced further. However, there is generally a trade-off between the call loss rate and the maximum number of relay flows. Thus, we show the numerical relationship between the call loss rate and the maximum number of relay flows. Finally, we show, using simulations in various network topologies and traffic situations, that the maximum number of relay flows and the network delay can be reduced by our proposed method.
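
    As a point of reference for the metric being minimized, the sketch below counts, under plain shortest-path routing, how many flows each node relays and reports the maximum. It uses the networkx library and a toy topology, and it is a baseline only, not the approximate algorithm proposed in the paper.

```python
from collections import Counter
import networkx as nx

def max_relay_flows(graph, demands):
    """Route each (src, dst) demand on a shortest path and return the
    largest number of flows relayed by any single node, plus the
    per-node relay counts."""
    relay_count = Counter()
    for src, dst in demands:
        path = nx.shortest_path(graph, src, dst)
        for node in path[1:-1]:            # intermediate (relay) nodes only
            relay_count[node] += 1
    return max(relay_count.values(), default=0), relay_count

# toy topology and demands
g = nx.cycle_graph(6)
g.add_edge(0, 3)
demands = [(0, 2), (1, 4), (5, 2), (0, 4)]
print(max_relay_flows(g, demands))
```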

  • Radar chart: Scanning for high QoE in QoS dimensions

    Page(s): 1 - 6

    The ongoing convergence of QoE (Quality of Experience) and QoS (Quality of Service) studies toward a thorough understanding of the end-user has opened numerous exciting possibilities for network and multimedia researchers. However, there is not yet a proper visualization tool able to map the many-to-one relationship between QoS metrics and QoE, leaving researchers speechless in the cacophony of traditional two-dimensional diagrams. Though mostly employed in qualitative analysis, the radar chart, with a few tweaks, turns out to be surprisingly suitable for the purpose. In this article, we present our adaptation of the radar chart and demonstrate, in a Voice-over-IP context, its use in single- and cross-application performance analysis, application recommendation, and network diagnosis.
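
    A basic radar-chart rendering of several QoS dimensions looks roughly like the matplotlib sketch below; the metric names and scores are made up, and the paper's specific tweaks are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

def qos_radar(metrics):
    """Plot normalized QoS metrics (0 = worst, 1 = best) on a radar chart."""
    labels = list(metrics)
    values = [metrics[k] for k in labels]
    angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
    values += values[:1]                      # close the polygon
    angles += angles[:1]
    ax = plt.subplot(polar=True)
    ax.plot(angles, values, linewidth=1.5)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(labels)
    ax.set_ylim(0, 1)
    return ax

# hypothetical per-application scores, already normalized so larger is better
qos_radar({"delay": 0.8, "jitter": 0.6, "loss": 0.9,
           "bandwidth": 0.7, "reordering": 0.95})
plt.show()
```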

  • Compound TCP+ for fairness improvement among Compound TCP connections in a wireless LAN

    Page(s): 1 - 6

    In high-speed and long-distance networks, TCP NewReno, the most popular version of TCP, cannot achieve sufficient throughput because of the essential nature of TCP's congestion control mechanism. Therefore, Compound TCP has been proposed. Compound TCP can achieve considerably larger throughput than TCP NewReno in high-speed and long-distance networks. The congestion control mechanism of Compound TCP consists of loss-based and delay-based congestion controls. On the other hand, in a wireless LAN, the throughput among TCP connections becomes unfair because of the media access control used in a wireless LAN. Since Compound TCP has the same type of congestion control as TCP, the same problem is expected to occur among Compound TCP connections. In this paper, we evaluate the performance of Compound TCP over a wireless LAN and show that the throughput among Compound TCP connections becomes unfair. Then, we propose Compound TCP+, which solves this problem, and show by simulation that Compound TCP+ connections share the bandwidth equally.

  • A whitelist approach to protect SIP servers from flooding attacks

    Page(s): 1 - 6

    As SIP-based VoIP services are expected to slowly replace traditional PSTN services, SIP servers are becoming potential targets of various attacks, one of which is flooding. In this paper, we argue that a whitelist, as a strategy to defend against flooding attacks, can be more effective on a SIP server than on a Web server, since most SIP clients tend to have persistent connections with their server and a whitelist is relatively easy to maintain. The methodology we propose for building a whitelist is capable of keeping the most comprehensive and up-to-date information about legitimate SIP clients without any integration with a SIP server. We also study the impact of various attacks on a SIP server, and evaluate the effectiveness of our approach under the most powerful attacks.
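
    A whitelist of this general kind can be as simple as remembering sources that recently completed a legitimate transaction and admitting only those while an attack is suspected. The sketch below illustrates that idea only; the TTL and the attack-detection trigger are assumptions, not the authors' methodology.

```python
import time

class SipWhitelist:
    """Illustrative whitelist sketch: sources that recently completed a
    legitimate SIP transaction (e.g. a successful REGISTER) are remembered,
    and only they are admitted while flooding is suspected."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.entries = {}          # source IP -> last-seen timestamp

    def record_success(self, src_ip):
        """Call when a transaction from src_ip completes normally."""
        self.entries[src_ip] = time.time()

    def admit(self, src_ip, under_attack):
        """Decide whether to process a new request from src_ip."""
        if not under_attack:
            return True
        last_seen = self.entries.get(src_ip)
        return last_seen is not None and time.time() - last_seen < self.ttl
```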

  • QoS-based adaptive playout scheduling based on the packet arrival statistics: Capturing local channel characteristics

    Page(s): 1 - 5

    In IP networks, voice quality suffers from the uncertainty of packet-switched network characteristics. In particular, delay variation, so-called jitter, can cause a packet-loss effect, resulting in sound degradation. An adaptive playout-buffer scheduling algorithm is used to overcome the packet-loss effect caused by jitter. A distribution-based approach provides an analytical scheduling scheme that meets a desired quality of service. This approach depends highly on the method used to calculate channel characteristics. In this paper, three different calculation methods are suggested. One of the three gives the distribution-based approach the ability to capture local channel characteristics. As a result, the distribution-based approach becomes more efficient.
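
    For background, the classic adaptive playout estimator keeps exponentially weighted estimates of the network delay and its variation and schedules playout a few deviations later. The sketch below shows that standard Ramjee-style baseline, not the distribution-based methods studied in the paper; alpha and the safety factor are conventional illustrative values.

```python
class AdaptivePlayout:
    """Exponentially weighted playout-delay estimator (classic baseline)."""

    def __init__(self, alpha=0.998, safety_factor=4.0):
        self.alpha = alpha
        self.k = safety_factor
        self.d = None       # smoothed network delay estimate
        self.v = 0.0        # smoothed delay variation (jitter) estimate

    def update(self, network_delay):
        """Feed the measured one-way delay of a received packet and get
        the playout delay to use for the next talkspurt."""
        if self.d is None:
            self.d = network_delay
        else:
            self.d = self.alpha * self.d + (1 - self.alpha) * network_delay
            self.v = self.alpha * self.v + (1 - self.alpha) * abs(self.d - network_delay)
        return self.d + self.k * self.v
```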

  • Implementation experiments to evaluate a new TCP congestion control supporting loss-fairness

    Page(s): 1 - 6

    This paper proposes a new TCP congestion control supporting loss-fairness, which also contributes to improving RTT fairness. It is well known that smaller-RTT flows grab more bandwidth than larger-RTT ones when they compete on a shared link. This is called RTT unfairness and is caused by the congestion control mechanism of the current TCP. To solve this problem, many protocols have been proposed. TCP-Libra is one of them; it formulates its window increase rate as a function of RTT and achieves the same throughput irrespective of different RTT values. However, we show that TCP-Libra's formulation holds only when the congestion window is halved upon packet losses, as in TCP-Reno. When we apply different (smarter) window control schemes, the formulation has to be modified accordingly. Therefore, in this paper, we consider a hybrid congestion control supporting RTT fairness that switches between two modes according to the buffering state at the bottleneck link. In the decrement phase upon packet loss, it decreases the window size to clear the buffered packets instead of halving the window size. Furthermore, we introduce the concept of loss-fairness for our window increase formulation, which is equivalent to RTT-fairness. The proposed method is validated by both implementation experiments and simulations, and it achieves much better performance than conventional methods.
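
    The decrement idea described above, shrinking the window just enough to drain the packets queued at the bottleneck rather than halving it, can be written in a few lines. The sketch below covers only that single step, under the usual estimate that the queued amount is cwnd * (1 - baseRTT/RTT); everything else about the proposed hybrid control is omitted.

```python
def decrease_on_loss(cwnd, base_rtt, current_rtt):
    """On packet loss, reduce the congestion window by the estimated number
    of packets buffered at the bottleneck instead of halving it.
    Illustrative sketch of the decrement phase only."""
    buffered = cwnd * (1.0 - base_rtt / current_rtt)   # estimated queued packets
    return max(cwnd - buffered, 1.0)

# example: 100-packet window, 50 ms base RTT inflated to 80 ms by queuing
print(decrease_on_loss(100.0, 0.050, 0.080))   # ~62.5 packets instead of 50
```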

  • Online game QoE evaluation using paired comparisons

    Page(s): 1 - 6

    To satisfy players' gaming experience, there is a strong need for a technique that can measure a game's quality systematically, efficiently, and reliably. In this paper, we propose to use paired comparisons and probabilistic choice models to quantify online games' QoE under various network conditions. The advantages of our methodology over traditional MOS ratings are that 1) the rating procedure is simpler and thus places less burden on experiment participants, 2) it derives ratio-scale scores, and 3) it enables systematic verification of participants' inputs. As a demonstration, we apply our methodology to evaluate three popular FPS (first-person shooter) games, namely Alien Arena (Alien), Halo, and Unreal Tournament (UT), and investigate their network robustness. The results indicate that Halo performs the best in terms of network robustness against packet delay and loss. However, if we take the degree of the games' sophistication into account, we consider that the robustness of UT against downlink delays could still be improved. We also show that our methodology can be a helpful tool for making decisions about design alternatives, such as how dead reckoning algorithms and time synchronization mechanisms should be implemented.
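
    Paired-comparison outcomes are typically turned into ratio-scale scores with a probabilistic choice model such as Bradley-Terry. The sketch below fits such scores with a minimal MM iteration; the win matrix is made-up toy data, and the paper's exact choice model may differ.

```python
import numpy as np

def bradley_terry(wins, iterations=200):
    """Fit ratio-scale Bradley-Terry scores from a paired-comparison win
    matrix (wins[i, j] = number of times item i was preferred over item j),
    using the standard MM update."""
    wins = np.asarray(wins, dtype=float)
    n = wins + wins.T                 # comparisons played between each pair
    p = np.ones(len(wins))            # initial scores
    for _ in range(iterations):
        for i in range(len(p)):
            denom = sum(n[i, j] / (p[i] + p[j]) for j in range(len(p)) if j != i)
            if denom > 0:
                p[i] = wins[i].sum() / denom
        p /= p.sum()                  # normalize to a ratio scale
    return p

# hypothetical outcomes for three games compared under the same impairment
scores = bradley_terry([[0, 12, 15],
                        [8, 0, 11],
                        [5, 9, 0]])
print(scores)
```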

  • An effective searching for multiple targets of information with shared computational resources in a large-scale network

    Page(s): 1 - 6

    In this paper, we propose an efficient computational-resource distribution scheduling scheme for information retrieval from multiple information sources in a large-scale network in the case where a user requests multiple target contents simultaneously. This scheme can reduce the cost of content searching and retrieval by limiting the effective area of connection from an information source to computational resources. Furthermore, we show the effectiveness of the scheme through computer simulations.
