
On the Conceptualization of Performance Evaluation of IaaS Services


4 Author(s): Li, Z.; O'Brien, L.; Zhang, H.; Cai, R.
Z. Li is with the School of Computer Science, ANU and NICTA, Canberra, ACT 2601, Australia.

Cloud computing has been increasingly accepted as a promising computing paradigm in industry, with Infrastructure as a Service (IaaS) being one of the most common delivery models. A growing number of providers now supply public IaaS services with different terminologies, definitions, and goals. Understanding the full scope of performance evaluation of candidate services is therefore crucial and beneficial both for service customers (e.g., cost-benefit analysis) and for providers (e.g., direction of improvement). Given the numerous and diverse IaaS service features to be evaluated, a natural strategy is to implement different types of evaluation experiments separately. Unfortunately, it can be hard to fairly distinguish between experimental types, because different evaluators may adopt different environments and techniques. To overcome these obstacles, we first established a novel taxonomy to help profile and clarify the nature of IaaS service performance evaluation, and then built a three-layer conceptual model to generalize existing performance evaluation practices. Using the relevant elements/classifiers in the taxonomy and conceptual model, evaluators can construct natural language-style descriptions and experimental design blueprints that outline the evaluation scope and guide new evaluation implementations. In essence, the generated descriptions and blueprints abstractly define and characterize the actual evaluation work. This enables relatively fair and rational comparisons between different performance evaluations according to their abstract characteristics.
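To illustrate the idea of composing a natural language-style description from taxonomy elements, the following is a minimal Python sketch. The abstract does not enumerate the paper's actual taxonomy dimensions, so the field names here (service feature, property, metric, workload) are illustrative assumptions, not the authors' classification.

```python
from dataclasses import dataclass


@dataclass
class EvaluationScope:
    """Hypothetical subset of taxonomy dimensions for one IaaS evaluation.

    The dimension names are assumptions for illustration; the paper's
    taxonomy may use different elements and classifiers.
    """
    service_feature: str  # e.g., the evaluated IaaS resource type
    quality_property: str  # e.g., the performance property of interest
    metric: str            # e.g., the unit or measure used
    workload: str          # e.g., the benchmark workload applied

    def describe(self) -> str:
        # Render the scope as a natural language-style description,
        # so that two evaluations can be compared by their abstract
        # characteristics rather than by raw experimental details.
        return (
            f"Evaluate the {self.quality_property} of the IaaS "
            f"{self.service_feature}, measured in {self.metric}, "
            f"under a {self.workload} workload."
        )


scope = EvaluationScope(
    service_feature="virtual machine",
    quality_property="data throughput",
    metric="MB/s",
    workload="sequential disk I/O",
)
print(scope.describe())
```

Comparing two such `EvaluationScope` records field by field gives a rough, implementation-independent way to judge whether two experiments target the same evaluation scope, which is the kind of fair comparison the abstract argues for.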

Published in:

IEEE Transactions on Services Computing (Volume: PP, Issue: 99)
