Single-Image Super-Resolution Reconstruction via Learned Geometric Dictionaries and Clustered Sparse Coding

4 Author(s)
Shuyuan Yang (Key Laboratory of Intelligent Perception and Image Understanding, Ministry of Education, Xidian University, Xi'an, China); Min Wang; Y. Chen; Y. Sun

Recently, single-image super-resolution reconstruction (SISR) via sparse coding has attracted increasing interest. In this paper, we propose a multiple-geometric-dictionaries-based clustered sparse coding scheme for SISR. First, a large number of high-resolution (HR) image patches are randomly extracted from a set of example training images and clustered into several groups of “geometric patches,” from which the corresponding “geometric dictionaries” are learned and then used to sparsely code each local patch in a low-resolution image. A clustering aggregation is performed on the HR patches recovered by the different dictionaries, followed by a subsequent patch aggregation that estimates the HR image. Because an image often contains many repetitive structures, we add a self-similarity constraint on the recovered image during patch aggregation to reveal new features and details. Finally, the HR residual image is estimated by the proposed recovery method and added back to better preserve the subtle details of the image. Experiments on natural images show that the proposed method outperforms its counterparts in both visual fidelity and numerical measures.
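For concreteness, a minimal sketch of the first stage described in the abstract (clustering HR training patches and learning one dictionary per cluster) might look like the code below. This is an illustration, not the authors' implementation: the patch size, cluster count, OMP sparsity level, and the use of scikit-learn's KMeans and MiniBatchDictionaryLearning are assumptions made here for brevity, and the paper's clustering-aggregation, self-similarity, and residual-compensation steps are omitted.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_geometric_dictionaries(hr_images, patch_size=8, n_clusters=4,
                                 n_atoms=64, patches_per_image=2000, seed=0):
    # Step 1: randomly sample HR patches from the example training images.
    patches = np.vstack([
        extract_patches_2d(img, (patch_size, patch_size),
                           max_patches=patches_per_image,
                           random_state=seed).reshape(-1, patch_size ** 2)
        for img in hr_images
    ])
    # Remove each patch's mean so clustering responds to geometry, not intensity.
    patches = patches - patches.mean(axis=1, keepdims=True)

    # Step 2: cluster the patches into "geometric" groups.
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(patches)

    # Step 3: learn one dictionary per cluster; OMP provides the sparse codes.
    dictionaries = []
    for k in range(n_clusters):
        dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                         transform_algorithm='omp',
                                         transform_n_nonzero_coefs=5,
                                         random_state=seed)
        dl.fit(patches[labels == k])
        dictionaries.append(dl)
    return dictionaries

def code_patch(lr_patch, dictionaries):
    # Sparse-code one (mean-removed, upscaled) LR patch with every dictionary;
    # the resulting HR estimates would then be combined by clustering aggregation.
    x = lr_patch.reshape(1, -1)
    return [dl.transform(x) @ dl.components_ for dl in dictionaries]

Removing the patch mean before clustering is a common convention in patch-based dictionary learning; the DC component is added back at reconstruction time.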

Published in:

IEEE Transactions on Image Processing (Volume: 21, Issue: 9)