Arbitrary view position and direction rendering for large-scale scenes

Authors:

Takahashi, T.; Kawasaki, H.; Ikeuchi, K.; Sakauchi, M. (Inst. of Ind. Sci., Tokyo Univ., Japan)

Abstract:

This paper presents a new method for rendering views, especially of large-scale scenes such as broad city landscapes. The main contribution of our method is that it can easily render any view from an arbitrary point in an arbitrary direction on the ground in a virtual environment. Our method belongs to the family of work that employs plenoptic functions; however, unlike other works of this type, it allows us to render a novel view from almost any point on the plane in which the images are taken, whereas previous methods impose restrictions on the reconstructible area. Thus, when synthesizing a large-scale virtual environment such as a city, our method has a great advantage. One application of our method is a driving simulator in the ITS domain: we can generate a view from any lane of a road using images taken while driving along just one lane. Our method, using an omni-directional camera or a similar measuring device, first captures panoramic images along a straight line, recording the capture position of each image. When rendering, the method divides the stored panoramic images into vertical slits, selects suitable slits based on our theory, and reassembles them to generate an image. The method can build a virtual city with walk-through capability, in which people can move and look around rather freely. In this paper, we describe the basic theory of a new plenoptic function, analyze the applicable area of the theory and the characteristics of the generated images, and demonstrate a complete working system on both indoor and outdoor scenes.
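The slit selection and reassembly step described in the abstract can be illustrated with a minimal sketch. This is not the authors' code; it assumes a simplified geometry in which panoramas are captured at known positions along the x-axis (y = 0), each panorama column covers one azimuth, and for every output column the ray from the novel viewpoint is traced back to the capture line to choose the nearest captured panorama. The function name, array layout, and parameters are all illustrative assumptions.

```python
import numpy as np

def render_novel_view(panoramas, cam_x, view_pos, fov, out_width):
    """Sketch of slit-based novel-view synthesis (hypothetical, simplified).

    panoramas: (N, H, W) stack of panoramic images; column j covers
               azimuth 2*pi*j/W measured from the +y axis.
    cam_x:     (N,) capture positions along the capture line y = 0.
    view_pos:  (px, py) novel viewpoint off the capture line (py < 0 so
               rays in the +y half-plane hit the line in front of us).
    fov:       horizontal field of view of the output image, in radians.
    """
    px, py = view_pos
    N, H, W = panoramas.shape
    out = np.empty((H, out_width), dtype=panoramas.dtype)
    thetas = np.linspace(-fov / 2, fov / 2, out_width)
    for j, th in enumerate(thetas):
        # Ray from the viewpoint in direction (sin th, cos th);
        # intersect it with the capture line y = 0.
        t = -py / np.cos(th)
        x_hit = px + t * np.sin(th)
        # Pick the panorama captured nearest the intersection point.
        cam = int(np.argmin(np.abs(cam_x - x_hit)))
        # A ray keeps its direction, so the matching slit in that
        # panorama is the column at the same azimuth th.
        col = int(round((th % (2 * np.pi)) / (2 * np.pi) * W)) % W
        out[:, j] = panoramas[cam, :, col]
    return out
```

For scene points on the viewing ray, the selected slit records exactly the ray the novel viewpoint would see, which is why capturing along a single line suffices; the nearest-camera choice introduces error only when the intersection falls between capture positions.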

Published in:

Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Volume 2

Date of Conference:

2000