Queuing theoretic approach to server allocation problem in time-delay cloud computing systems

4 Author(s)
Taichi Kusaka (Graduate School of Information Science and Technology, Aichi Prefectural University, Nagakute-cho, Aichi 480-1198, Japan); Takashi Okuda; Tetsuo Ideguchi; Xuejun Tian

Cloud computing is a popular computing model for processing large volumes of data on clusters of commodity computers. It aims to power next-generation data centers and enables application service providers to lease data center capabilities for deploying applications according to user QoS (Quality of Service) requirements. Because cloud applications differ in composition, configuration, and deployment requirements, quantifying the performance of resource allocation policies and application scheduling algorithms is important in cloud computing environments, across different application and service models and under varying load, network time-delay, and system size. To perform this quantification, the authors apply VCHS (Various Customers, Heterogeneous Servers) queuing systems.
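The VCHS model itself is not detailed in this abstract. As a much simpler illustration of the kind of queuing-theoretic quantification described above, the sketch below computes the Erlang C waiting probability and mean queueing delay for a classical M/M/c system (Poisson arrivals, exponential service, c identical servers); all parameter names here are illustrative, not taken from the paper.

```python
from math import factorial

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability that an arriving customer must wait in an M/M/c queue.

    offered_load = arrival_rate / service_rate; must be < servers
    for the queue to be stable.
    """
    a, c = offered_load, servers
    if not 0 < a < c:
        raise ValueError("need 0 < offered_load < servers for stability")
    # Last term of the Erlang C denominator (all-servers-busy states).
    busy_term = (a ** c / factorial(c)) * (c / (c - a))
    denom = sum(a ** k / factorial(k) for k in range(c)) + busy_term
    return busy_term / denom

def mean_wait(servers: int, arrival_rate: float, service_rate: float) -> float:
    """Mean time spent waiting in queue (excluding service), M/M/c."""
    p_wait = erlang_c(servers, arrival_rate / service_rate)
    return p_wait / (servers * service_rate - arrival_rate)

# Example: 4 servers, arrivals at 3 per unit time, service rate 1 per server.
print(mean_wait(4, 3.0, 1.0))
```

With c = 1 the formulas reduce to the familiar M/M/1 results (waiting probability equals the utilization rho), which is a quick sanity check on the implementation.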

Published in:

2011 23rd International Teletraffic Congress (ITC)

Date of Conference:

6-9 Sept. 2011