We investigate the relationship between camera design and 3D photography by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. To compute the 3D structure of a scene accurately from a moving vision sensor, we need to estimate the motion of the sensor from the recorded image information, a problem that has been well studied. By relating the differential structure of the time-varying plenoptic function to different known and new camera designs, we establish a hierarchy of cameras based on the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera, for which the structure-from-motion problem is nonlinear and ill-posed. At the high end is a camera we call the full-field-of-view polydioptric camera, for which the problem is linear and stable. In between lie omnidirectional sensors, as well as multiple-view cameras with a large field of view, which we have built. We develop design suggestions for a polydioptric camera especially suited for 3D photography, and we propose a linear algorithm that utilizes this camera design to recover the structure of the scene.
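To make the linearity claim concrete, the sketch below illustrates one way such an estimation could look: if a polydioptric sensor measures the plenoptic function densely enough to provide, for each sampled ray, a spatial gradient of brightness with respect to the viewpoint position, then brightness constancy under a rigid motion (translation `v`, rotation `omega`) yields one equation per ray that is linear in the six motion parameters. The specific constraint form, the simulated gradients, and all variable names here are illustrative assumptions for a synthetic, noise-free setting, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth rigid motion to be recovered.
v_true = np.array([0.10, -0.20, 0.05])   # translation
w_true = np.array([0.02, 0.01, -0.03])   # rotation (angular velocity)

n = 200
X = rng.uniform(-1.0, 1.0, (n, 3))       # sampled ray origins on the sensor
G = rng.normal(size=(n, 3))              # simulated spatial gradients dL/dx per ray

# Under rigid motion, the viewpoint of a ray at x moves with velocity v + omega x x;
# brightness constancy then gives the temporal derivative L_t = -G . (v + omega x x).
ray_vel = v_true + np.cross(w_true, X)
Lt = -np.einsum("ij,ij->i", G, ray_vel)

# Each ray contributes one linear equation:
#   -L_t = G . v + G . (omega x x) = G . v + (x x G) . omega
# Stacking all rays gives an overdetermined linear system in (v, omega).
A = np.hstack([G, np.cross(X, G)])
m, *_ = np.linalg.lstsq(A, -Lt, rcond=None)
v_est, w_est = m[:3], m[3:]
```

With many rays and only six unknowns, ordinary least squares recovers the motion in one linear solve, in contrast to the nonlinear, ill-posed estimation required for a single pinhole view; scene structure could then be triangulated from the known motion.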