
IO latency hiding in pipelined architectures

Author:

S. Siewert, University of Colorado, Boulder, CO, USA

Abstract:

This paper reports the development of a novel mathematical formalism for analyzing data pipelines. The method accounts for the I/O and CPU latencies of each stage of the pipeline. An experimental pipeline was constructed from a video encoder, frame processing, and transport of the frames over an IP (Internet Protocol) network. The pipelined architecture overlaps processing with DMA, encoding, and network-transport latency so that streams can be processed with optimal scalability. Model predictions were compared with experimental test results and found to be consistent, so the model is expected to provide a good estimate of the scalability of streaming video-on-demand systems. Video-on-demand is a rapidly growing service segment for entertainment, advertising, online education, and a myriad of emergent applications.
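The latency-hiding idea in the abstract can be illustrated with a simple analytic sketch. The stage names and latency values below are hypothetical examples, not the paper's formalism or measured data: they only show why overlapping stages bounds per-frame cost by the slowest stage rather than the sum of all stages.

```python
# Hypothetical sketch of latency hiding in a staged data pipeline.
# The three stage latencies (DMA read, encode, network transport)
# are illustrative values, not figures from the paper.

def sequential_time(stage_latencies, n_frames):
    """Total time if each frame passes through every stage
    before the next frame starts (no overlap)."""
    return n_frames * sum(stage_latencies)

def pipelined_time(stage_latencies, n_frames):
    """Total time when stages overlap: the first frame pays the
    full fill cost, then the slowest (bottleneck) stage sets the
    steady-state cost of each additional frame."""
    fill = sum(stage_latencies)        # pipeline fill for the first frame
    bottleneck = max(stage_latencies)  # per-frame cost once full
    return fill + (n_frames - 1) * bottleneck

stages = [2.0, 5.0, 3.0]  # ms: DMA, encode, transport (hypothetical)
frames = 10
print(sequential_time(stages, frames))  # 100.0 ms without overlap
print(pipelined_time(stages, frames))   # 55.0 ms with overlap
```

With these example numbers the overlapped pipeline finishes 10 frames in 55 ms instead of 100 ms, and as the frame count grows, throughput approaches one frame per bottleneck latency (here, the 5 ms encode stage), which is the scalability property the abstract attributes to the architecture.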

Published in:

Technical, Professional and Student Development Workshop, 2005 IEEE Region 5 and IEEE Denver Section

Date of Conference:

7-8 April 2005