We present a method for boosting the resolution of 'target' frames of video by exploiting the supra-Nyquist information available in surrounding frames during slow scene motion. Pixels in the frames surrounding a target frame were aligned to the target frame at subpixel resolution by estimating the translations of small upsampled image patches centered on each pixel. This analysis was performed locally in order to account for the kinds of complex scene motion typical of human face imagery, motion that often cannot be modeled effectively by a whole-image 2D affine transform. Composite super-resolved images were built up from the translated pixels, and missing pixels in the super-resolved pixel plane were imputed via adaptive-bandwidth bandpass interpolation and median filtering. Ambiguities in motion estimation due to the 'aperture problem' were systematically explored through visualization.
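The local subpixel alignment step described above could be sketched roughly as follows. This is a minimal illustration only, assuming nearest-neighbour patch upsampling and FFT-based circular cross-correlation; the paper's actual interpolation scheme, patch sizes, and matching criterion are not specified here, and the function name is hypothetical:

```python
import numpy as np

def estimate_subpixel_shift(ref_patch, tgt_patch, upsample=4):
    """Estimate the (dy, dx) translation of tgt_patch relative to
    ref_patch at 1/upsample pixel resolution.

    Both patches are upsampled on a finer grid and cross-correlated;
    the correlation peak on the fine grid gives a shift estimate in
    units of original pixels, quantized to 1/upsample.
    """
    # Nearest-neighbour upsampling (an assumption; any band-limited
    # interpolator could be substituted here).
    ref_up = np.kron(np.asarray(ref_patch, float), np.ones((upsample, upsample)))
    tgt_up = np.kron(np.asarray(tgt_patch, float), np.ones((upsample, upsample)))

    # Remove the mean so the DC component does not bias the peak.
    ref_up -= ref_up.mean()
    tgt_up -= tgt_up.mean()

    # Circular cross-correlation via the FFT; the peak lies at the
    # displacement that best aligns tgt_up with ref_up.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref_up)) * np.fft.fft2(tgt_up)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    # Unwrap the circular peak indices to signed shifts, then convert
    # from fine-grid units back to original-pixel units.
    shifts = []
    for p, n in zip(peak, corr.shape):
        if p > n // 2:
            p -= n
        shifts.append(p / upsample)
    return tuple(shifts)
```

In a full pipeline, this estimate would be computed per pixel over a small surrounding patch, and the pixel would then be deposited at the correspondingly shifted location on the super-resolved grid before the imputation stage.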