Michael S. Langer - IEEE Xplore Author Profile

When a compact light source illuminates a horizontal shiny ground plane at an oblique angle, the resulting highlight is vertically oriented and highly elongated. We refer to such highlights as specular streaks. Specular streaks occur commonly on wet roadways, especially at night, for example, the reflections of street lamps or car headlights. Here we present a 3D model of specular streaks seen by a...
Gated cameras hold promise as an alternative to scanning LiDAR sensors, offering high-resolution 3D depth that is robust to back-scatter in fog, snow, and rain. Instead of sequentially scanning a scene and directly recording depth via the photon time-of-flight, as in pulsed LiDAR sensors, gated imagers encode depth in the relative intensity of a handful of gated slices, captured at megapixel resolution...
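As a rough illustration of how depth can be encoded in the relative intensity of gated slices, here is a minimal two-slice intensity-ratio decoder. The triangular range-intensity profiles, the z_min/z_max bounds, and the function name are assumptions for illustration, not the paper's calibration; real gated imagers use measured profiles and more than two slices.

```python
import numpy as np

def depth_from_gated_ratio(i_near, i_far, z_min, z_max):
    """Toy decoder for two overlapping gated slices (illustrative only).

    Assumes triangular range-intensity profiles so that the normalized
    ratio i_far / (i_near + i_far) grows linearly with depth between
    z_min and z_max."""
    i_near = np.asarray(i_near, dtype=float)
    i_far = np.asarray(i_far, dtype=float)
    total = np.maximum(i_near + i_far, 1e-9)
    ratio = i_far / total
    return z_min + ratio * (z_max - z_min)

# A pixel seen equally in both slices sits mid-range.
print(depth_from_gated_ratio(0.5, 0.5, z_min=5.0, z_max=60.0))  # 32.5
```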
In traditional depth from defocus (DFD) models, the blur kernel is a symmetric function whose width is proportional to the absolute distance in diopters between the scene point and the focal plane. A symmetric blur kernel implies a two-fold front-back ambiguity in the depth estimates, however. To resolve this ambiguity using only a single image of a scene, one typically introduces an asymmetry int...
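The symmetric-kernel model and its front-back ambiguity can be written down directly. A minimal sketch, where the constant k stands in for the aperture and sensor parameters (an illustrative placeholder, not a calibrated value):

```python
import math

def blur_width_diopters(z, z_focus, k=1.0):
    """Symmetric-kernel DFD model: blur width proportional to the absolute
    distance in diopters between a scene point and the focal plane.
    k lumps together the aperture and sensor constants (placeholder)."""
    return k * abs(1.0 / z - 1.0 / z_focus)

z_f = 2.0  # camera focused at 2 m, i.e. 0.5 diopters
# Front-back ambiguity: a point at 1 m (1.0 D) and a point at infinity
# (0.0 D) are both 0.5 D from the focal plane, so they blur identically.
print(blur_width_diopters(1.0, z_f))       # 0.5
print(blur_width_diopters(math.inf, z_f))  # 0.5
```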
Lossy texture compression is increasingly used to reduce GPU memory and bandwidth consumption. However, as recent studies have pointed out, evaluating the quality of compressed textures is a difficult problem. Indeed, using Peak Signal-to-Noise Ratio (PSNR) on texture images, as is done in most applications, may not be the correct way to proceed. In particular, there is evidence that masking effects apply wh...
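For reference, the PSNR measure in question is straightforward to compute. A minimal sketch; the 8-bit peak value and the toy random texture below are illustrative assumptions only:

```python
import numpy as np

def psnr(reference, compressed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference texture and
    its lossy-compressed version. PSNR ignores perceptual masking, which
    is exactly the limitation the abstract points to."""
    ref = np.asarray(reference, dtype=np.float64)
    test = np.asarray(compressed, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

# Illustrative only: a random 8-bit texture and a noisy stand-in for its
# compressed version.
rng = np.random.default_rng(0)
tex = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
degraded = np.clip(tex + rng.normal(0.0, 5.0, tex.shape), 0, 255).astype(np.uint8)
print(f"{psnr(tex, degraded):.2f} dB")
```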
Depth from defocus based methods rely on measuring the depth-dependent blur at each pixel of the image. A core component in the defocus blur estimation process is the depth-variant blur kernel. This blur kernel is often approximated as a Gaussian or pillbox kernel, which only works well for small amounts of blur. In general, the blur kernel depends on the shape of the aperture and can vary considerably with...
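The two small-blur approximations mentioned here are easy to generate for comparison. A short sketch; the kernel sizes and radii below are arbitrary illustrative choices:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def pillbox_kernel(radius, size):
    """Pillbox (disc) blur kernel of the given radius, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

# Both are reasonable for small blur; for large defocus the true kernel
# also depends on the aperture shape, which neither of these captures.
print(gaussian_kernel(1.5, 4).shape, pillbox_kernel(3.0, 9).shape)
```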
Different models for estimating depth from defocused images have been proposed over the years. Typically, two differently defocused images are used by these models. Many of them work on the principle of transforming one or both of the images so that the transformed images become equivalent. One of the most common models is to estimate the relative blur between a pair of defocused images and compute...
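A minimal sketch of the relative-blur idea, assuming Gaussian blur and a brute-force global search (the paper's actual estimator is not reproduced here): blurring the sharper image until it best matches the blurrier one recovers the relative blur between them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_relative_blur(sharper, blurrier, sigmas=np.linspace(0.1, 5.0, 50)):
    """Brute-force estimate of the relative blur between two defocused
    images of the same scene: blur the sharper image with Gaussians of
    increasing sigma and keep the sigma that best matches the blurrier
    image (a single global estimate, not the per-pixel case)."""
    errors = [np.mean((gaussian_filter(sharper, s) - blurrier) ** 2) for s in sigmas]
    return sigmas[int(np.argmin(errors))]

# Synthetic check: the recovered value should match the extra blur
# applied to the second image (Gaussian blurs add in variance).
rng = np.random.default_rng(1)
scene = gaussian_filter(rng.random((128, 128)), 1.0)
img1 = gaussian_filter(scene, 1.0)
img2 = gaussian_filter(scene, np.sqrt(1.0 ** 2 + 2.0 ** 2))
print(estimate_relative_blur(img1, img2))  # close to 2.0
```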
Depth from defocus (DFD) requires estimating the depth-dependent defocus blur at every pixel. Several approaches for accomplishing this have been proposed over the years. For a pair of images, this is done by modeling the defocus relationship between the two differently defocused images, and for single defocused images, by relying on the properties of the point spread function and the characteris...
Pictures taken with finite aperture lenses typically have out-of-focus regions. While such defocus blur is useful for creating photographic effects, it can also be used for depth estimation. In this paper, we look at different camera settings for Depth from Defocus (DFD), the conditions under which depth can be estimated unambiguously for those settings, and the optimality of different settings in term...
An omni stereo pair of images provides depth information from stereo up to 360 degrees around a central observer. A method for synthesizing omni stereo video textures was recently introduced which was based on blending overlapping stereo videos that were filmed several seconds apart. While it produced loopable omni stereo videos that can be displayed up to 360 degrees around a viewer, ghosting...
A panoramic stereo (or omnistereo) pair of images provides depth information from stereo up to 360 degrees around a central observer. Because omnistereo lenses or mirrors do not yet exist, synthesizing omnistereo images requires multiple stereo camera positions and baseline orientations. Recent omnistereo methods stitch together many small field-of-view images called slits which are captured by on...
This paper evaluates the performance of different stereo formulations in the context of cluttered scenes with a large number of binocular-monocular boundaries (i.e. occlusion boundaries). Three stereo methods employing three different constraints are considered. These are basic (Basic), uniqueness (KZ-uni), and visibility (KZ-vis). Scenes for the experiments are synthetically generated and some are ...
This paper examines large partial occlusions in an image which occur near depth discontinuities when the foreground object is severely out of focus. We model these partial occlusions using matting, with the alpha value determined by the convolution of the blur kernel with a pinhole projection of the occluder. The main contribution is a method for removing the image contribution of the foreground o...
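The forward matting model described here can be sketched directly. This is only an illustration of the composite, assuming a Gaussian blur kernel; it does not reproduce the paper's removal method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_matte(occluder_mask, sigma):
    """Alpha matte of a defocused foreground occluder: the binary pinhole
    projection of the occluder convolved with the blur kernel
    (a Gaussian kernel is assumed here for simplicity)."""
    return gaussian_filter(occluder_mask.astype(float), sigma)

def composite(foreground, background, occluder_mask, sigma):
    """Observed image as a matting composite: blurred foreground where
    alpha is high, in-focus background where alpha is low."""
    alpha = defocus_matte(occluder_mask, sigma)
    fg_blurred = gaussian_filter(foreground.astype(float), sigma)
    return alpha * fg_blurred + (1.0 - alpha) * background

# Toy scene: a dark vertical bar severely out of focus in front of a
# textured background.
rng = np.random.default_rng(2)
background = rng.random((64, 64))
foreground = np.full((64, 64), 0.2)
mask = np.zeros((64, 64))
mask[:, 28:36] = 1.0
observed = composite(foreground, background, mask, sigma=4.0)
print(observed.shape)
```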
We present a method to remove partial occlusion that arises from out-of-focus thin foreground occluders such as wires, branches, or a fence. Such partial occlusion causes the irradiance at a pixel to be a weighted sum of the radiance of a blurred foreground occluder and that of the background. The result is that the background component has lower contrast than it would if seen without the occlude...
We present a focus-based method to recover the orientation of a textured planar surface patch from a single image. The method exploits the relationship between the orientation of equifocal (i.e. uniformly-blurred) contours in the image and the plane's tilt and slant angles. Compared to previous methods that determine planar orientation, we make fewer assumptions about the texture and remove the re...
The human visual system is often able to recognize shading patterns and to discriminate them from surface reflectance patterns. To understand how this ability is possible, we investigate what makes shading patterns special. We study a statistical property of shading patterns, namely that they tend to be more elongated near intensity maxima. Second-order derivatives of shading and of surface height...
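One simple way to quantify second-order elongation of an intensity pattern is through the eigenvalues of its Hessian. A minimal sketch; this particular measure and the synthetic test patterns are illustrative choices, not necessarily the statistic used in the paper.

```python
import numpy as np

def hessian_elongation(image):
    """Per-pixel elongation from the intensity Hessian: ratio of the
    smaller to the larger eigenvalue magnitude (near 0 = elongated,
    ridge-like; near 1 = isotropic)."""
    img = np.asarray(image, dtype=float)
    iy, ix = np.gradient(img)
    iyy, _ = np.gradient(iy)
    ixy, ixx = np.gradient(ix)
    # Eigenvalues of the symmetric 2x2 Hessian [[ixx, ixy], [ixy, iyy]].
    tr = ixx + iyy
    det = ixx * iyy - ixy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    big = np.maximum(np.abs(lam1), np.abs(lam2))
    small = np.minimum(np.abs(lam1), np.abs(lam2))
    return np.where(big > 0, small / np.maximum(big, 1e-12), 1.0)

# An elongated Gaussian ridge vs. an isotropic blob, measured at the peak.
yy, xx = np.mgrid[-32.0:32.0, -32.0:32.0]
ridge = np.exp(-(xx ** 2 / 200.0 + yy ** 2 / 20.0))
blob = np.exp(-(xx ** 2 + yy ** 2) / 60.0)
print(hessian_elongation(ridge)[32, 32], hessian_elongation(blob)[32, 32])
```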
When an observer moves in a 3D static scene, the resulting motion field depends on the depth of the visible objects and on the observer's instantaneous translation and rotation. It is well-known that the vector difference (or motion parallax) between nearby image motion field vectors points toward the direction of heading and so computing this vector difference can help in estimating the heading...
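The standard instantaneous motion field equations make the parallax argument easy to verify numerically. A minimal sketch with unit focal length; the particular translation, rotation, and depths below are arbitrary illustrative numbers.

```python
import numpy as np

def motion_field(x, y, Z, T, W):
    """Instantaneous image velocity at image point (x, y) of a scene point
    at depth Z, for observer translation T = (Tx, Ty, Tz) and rotation
    W = (Wx, Wy, Wz), with focal length 1 (Longuet-Higgins/Prazdny form)."""
    Tx, Ty, Tz = T
    Wx, Wy, Wz = W
    u = (-Tx + x * Tz) / Z + Wx * x * y - Wy * (1 + x ** 2) + Wz * y
    v = (-Ty + y * Tz) / Z + Wx * (1 + y ** 2) - Wy * x * y - Wz * x
    return np.array([u, v])

T = np.array([0.2, 0.1, 1.0])      # heading; FOE at (Tx/Tz, Ty/Tz) = (0.2, 0.1)
W = np.array([0.03, -0.02, 0.01])  # arbitrary rotation
x, y = 0.5, -0.3

# Velocities of two points at (nearly) the same image location but
# different depths: their difference cancels the rotational component
# and is parallel to the line from (x, y) to the focus of expansion.
d = motion_field(x, y, 2.0, T, W) - motion_field(x, y, 6.0, T, W)
toward_foe = np.array([x - T[0] / T[2], y - T[1] / T[2]])
print(d[0] * toward_foe[1] - d[1] * toward_foe[0])  # ~0: parallax aligns with the FOE direction
```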
When an observer moves in a 3D static scene, the motion field depends on the depth of the visible objects and on the observer's instantaneous translation and rotation. By computing the difference between nearby motion field vectors, the observer can estimate the direction of local motion parallax and in turn the direction of heading. It has recently been argued that, in 3D cluttered scenes such as...
In this paper, we address the problem of finding depth from defocus in a fundamentally new way. Most previous methods have used an approximate model in which blurring is shift invariant and pixel area is negligible. Our model avoids these assumptions. We consider the area in the scene whose radiance is recorded by a pixel on the sensor, and relate the size and shape of that area to the scene's pos...
When an observer moves through a rigid 3D scene, points that are near to the observer move with a different image velocity than points that are far away. The difference between image velocity vectors is the direction of motion parallax. This direction vector points towards the observer's translation direction. Hence estimates of the direction of motion parallax are useful for estimating the observ...
This paper presents a novel method for the removal of unwanted image intensity due to occluding objects far from the plane of focus. Such occlusions may arise in scenes with large depth discontinuities, and result in image regions where both the occluding and background objects contribute to pixel intensities. The contribution of the occluding object's radiance is modeled by reverse projection, an...
Previous methods for estimating the motion of an observer through a static scene require that image velocities can be measured. For the case of motion through a cluttered 3D scene, however, measuring optical flow is problematic because of the high density of depth discontinuities. This paper introduces a method for estimating motion through a cluttered 3D scene that does not measure velocities at ...
Classical studies of measuring image motion by computer have concentrated on the case of optical flow, in which there is a unique velocity near each point of the image. In Langer and Mann (2001), we introduced a generalization of optical flow in which a range of parallel velocities can occur near each point in the image. Such image motion arises in many natural situations, such as camera motion in...
Studies of image motion typically address motion categories on a case-by-case basis. Examples include a moving point, a moving contour, or a 2D optical flow field. The typical assumption made in these studies is that there is a unique velocity at each moving point in the image. In this paper we relax this assumption. We introduce a broader set of motion categories in which the set of motions at a ...
Traditional light source modelling is concerned with specific types of light sources, the two most common of which are point sources and daylight. Little attempt has been made, however, to relate different types of sources to each other. For example, how may the lighting from an overcast sky be compared to that from a lamp? Having a theoretical framework to compare different types of light sources...
A new surface radiance model for diffuse lighting is presented which incorporates shadows, interreflections, and surface orientation. An algorithm is presented that uses this model to compute shape-from-shading under diffuse lighting. The algorithm is tested on both synthetic and real images, and is found to perform more accurately than the only previous algorithm for this problem.