
Multistreaming of 3-D Scenes With Optimized Transmission and Rendering Scalability

Authors: Dihong Tian and G. AlRegib, School of Electrical and Computer Engineering, Georgia Institute of Technology, Savannah, GA

Three-dimensional (3D) graphic scenes require considerable network bandwidth to transmit and computing power to render on a user's terminal. Toward high-quality real-time display, we propose a sender-driven mechanism for streaming 3D scenes in a resource-constrained environment. Objects are encoded into multiple resolutions to provide transmission and rendering scalability, and a weighted distortion metric is developed to measure the quality of a scene rendered with multiresolution objects, modeling the objects' unequal importance to the display. To preserve the manipulation independence of multiple objects during data delivery while providing preferential treatment for different objects, as well as for different layers of each object, the objects are transmitted over multiple streams in a partially sequenced and partially reliable fashion. A rate-distortion optimization framework is developed that determines an optimal level of reliability for every chunk of data in each stream, taking into account the rendering importance of the object, the distortion-rate performance of the data chunks, and the statistics of the network link. Simulation results show that, compared with heuristic methods, the proposed framework maximizes the display quality of the scene while minimizing the amount of data that must be processed by the client's rendering engine.
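To make the weighted distortion metric and the per-chunk reliability decision concrete, here is a minimal sketch in Python. It is not the paper's actual algorithm: the object names, the additive distortion model, and the greedy best-reduction-per-byte selection (a stand-in for the full rate-distortion optimization over link statistics) are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    obj: str        # object this chunk refines (hypothetical IDs)
    size: int       # bytes spent if the chunk is delivered reliably
    d_red: float    # distortion reduction for its object if delivered

def weighted_scene_distortion(base, weights, delivered):
    """Weighted scene distortion: D_scene = sum_i w_i * D_i,
    where each reliably delivered chunk lowers its object's D_i."""
    d = dict(base)
    for c in delivered:
        d[c.obj] -= c.d_red
    return sum(weights[o] * d[o] for o in d)

def greedy_reliability(chunks, weights, budget):
    """Choose which chunks to protect: rank by weighted distortion
    reduction per byte and take chunks until the rate budget is spent.
    This greedy rule is only a simplified proxy for the paper's
    rate-distortion optimization framework."""
    ranked = sorted(chunks,
                    key=lambda c: weights[c.obj] * c.d_red / c.size,
                    reverse=True)
    chosen, spent = [], 0
    for c in ranked:
        if spent + c.size <= budget:
            chosen.append(c)
            spent += c.size
    return chosen
```

Under this model, a high-weight foreground object's refinement layers are protected before a low-weight background object's, which mirrors the abstract's point that reliability should track rendering importance rather than being uniform across streams.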

Published in: IEEE Transactions on Multimedia (Volume 9, Issue 4)