Abstract:
Video streams, whether delivered as on-demand streaming or live streaming, usually have to be converted (i.e., transcoded) based on the characteristics (e.g., spatial resolution) of clients' devices. Because transcoding is a computationally expensive operation, streaming service providers currently store numerous transcoded versions of the same video to serve different types of client devices. However, recent studies show that access to video streams has a long-tail distribution. That is, a few popular videos are accessed frequently, while the majority are accessed only rarely. The idea we propose in this research is to transcode the infrequently accessed videos in an on-demand (i.e., lazy) manner. Due to the cost of maintaining infrastructure, streaming service providers (e.g., Netflix) commonly use cloud services. However, the challenge in utilizing cloud services for video transcoding is how to deploy cloud resources in a cost-efficient manner without a major impact on the quality of video streams. To address this challenge, in this research, we present an architecture for on-demand transcoding of video streams. The architecture provides a platform for streaming service providers to utilize cloud resources cost-efficiently while respecting the Quality of Service (QoS) requirements of video streams. In particular, the architecture includes a QoS-aware scheduling component that efficiently maps video streams to cloud resources, and a cost-efficient dynamic (i.e., elastic) resource provisioning policy that adapts resource acquisition to the QoS requirements of the video streams.
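The lazy-transcoding idea described above can be sketched informally: pre-transcoded versions are kept only for videos that prove popular, while long-tail videos are transcoded on demand when a client requests them. The following is a minimal Python sketch of this caching policy; all names (`VideoStore`, `transcode`, `popularity_threshold`) are illustrative assumptions, not identifiers from the paper.

```python
class VideoStore:
    """Sketch of lazy (on-demand) transcoding with popularity-based caching."""

    def __init__(self, popularity_threshold):
        self.popularity_threshold = popularity_threshold
        self.access_counts = {}   # video_id -> number of requests so far
        self.pretranscoded = {}   # (video_id, resolution) -> stored version

    def transcode(self, video_id, resolution):
        # Stand-in for a computationally expensive (cloud) transcoding job.
        return f"{video_id}@{resolution}"

    def request(self, video_id, resolution):
        self.access_counts[video_id] = self.access_counts.get(video_id, 0) + 1
        key = (video_id, resolution)
        if key in self.pretranscoded:
            # Popular video: serve the stored, pre-transcoded version.
            return self.pretranscoded[key]
        # Long-tail video: transcode lazily, on demand.
        result = self.transcode(video_id, resolution)
        # Once a video proves popular enough, keep its transcoded version.
        if self.access_counts[video_id] >= self.popularity_threshold:
            self.pretranscoded[key] = result
        return result
```

Under this policy, storage is spent only on the few frequently accessed videos, while the (cheaper, rarer) long-tail requests pay the transcoding cost at request time.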
Published in: 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)
Date of Conference: 16-19 May 2016
Date Added to IEEE Xplore: 21 July 2016